CN103810502B - Image matching method and system - Google Patents
Image matching method and system
- Publication number
- CN103810502B CN103810502B CN201210448015.5A CN201210448015A CN103810502B CN 103810502 B CN103810502 B CN 103810502B CN 201210448015 A CN201210448015 A CN 201210448015A CN 103810502 B CN103810502 B CN 103810502B
- Authority
- CN
- China
- Prior art keywords
- image
- processor
- matched
- training subset
- registered images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
This application discloses an image matching method and system. In the method, a processor intercepts training subsets from registered images preset in a database, performs statistical classification on the image features of each training subset, and layers the classification results to obtain a codeword for the training subset. The processor then acquires an input image to be matched and intercepts its image sub-blocks; from the image features in the sub-blocks and the codewords of the training subsets, it obtains the matching degree between the image to be matched and the registered image, and judges from the size of the matching degree whether the image to be matched matches the registered image successfully. By judging the correspondence between the codewords of the sub-blocks of the image to be matched and the codewords of the training subsets of the registered images, the matching degree between the two is determined, which reduces the complexity of matching and reduces computation and memory occupation.
Description
Technical field
The application belongs to the field of image processing and, specifically, relates to an image matching method and system.
Background art
One of the major problems in computer vision is how to match the key points in different images; this is widely applied in fields such as target detection, pattern matching, and image stitching.
When matching the key points of images, the key points must usually first be detected (interest point detection). During detection, in order to reduce the number of key points that need to be matched and thereby reduce the complexity of image matching, the key points are generally extracted using invariant features. Furthermore, because images generally differ under transformations of viewing angle, scale, and so on, image features that are invariant to rotation, gray scale, translation, and scale are usually extracted as the invariant features and used to describe the image sub-blocks near the key points.
Among current image matching methods, the Scale Invariant Feature Transform (SIFT) descriptor proposed by David Lowe is commonly used. In this method, samples are taken in a neighborhood window centered on the key point, and a histogram is used to count the gradient directions of the neighborhood pixels; the histogram peak represents the dominant direction of the neighborhood gradient at the key point and is taken as the direction of the key point. Adjusting by this dominant direction eliminates the differences caused by the rotation component of an affine transformation. However, when the camera viewing angle changes greatly, the matching performance of the SIFT descriptor declines sharply; establishing the feature points also requires a large amount of computation, and the complexity of the operation is high, making it especially difficult to meet real-time processing requirements on mobile devices.
There is another image matching method, random ferns, which does not attempt to describe the local invariant features of each feature point explicitly but instead trains a classifier, treating the feature points as objects to be recognized one by one. However, it requires a very long training process and occupies a large amount of memory at run time, so it is especially difficult to implement on memory-limited mobile devices such as mobile phones.
Summary of the invention
The technical problem to be solved by this application is to provide an image matching method and system that reduce the complexity of conventional image matching, reduce its computation and memory occupation, and in particular realize real-time image matching on mobile devices.
In order to solve the above technical problem, this application provides an image matching method, which includes:
a processor intercepts a training subset of a registered image according to the registered images preset in a database, performs statistical classification on the image features of the training subset, and layers the statistical classification results to obtain a codeword of the training subset;
the processor acquires an input image to be matched and intercepts its image sub-blocks; according to the image features in the image sub-blocks and the codeword of the training subset, it obtains the matching degree between the image to be matched and the registered image, and judges from the size of the matching degree whether the image to be matched matches the registered image successfully.
In order to solve the above technical problem, this application also provides an image matching system, including a processor and a database. The database stores the preset registered images. The processor intercepts a training subset of a registered image according to the registered images preset in the database, performs statistical classification on the image features of the training subset, and layers the statistical classification results to obtain a codeword of the training subset; it also acquires an input image to be matched, intercepts its image sub-blocks, obtains the matching degree between the image to be matched and the registered image according to the image features in the sub-blocks and the codeword of the training subset, and judges from the size of the matching degree whether the match succeeds.
Compared with existing schemes, the technical effect obtained by this application is that statistical layering of the image features of the training subsets of the registered images yields corresponding codewords, and judging the correspondence between the codewords of the sub-blocks of the image to be matched and the codewords of the training subsets of the registered images determines the matching degree between the two. From this matching degree it can be comprehensively judged whether the image to be matched matches the registered image, which reduces the complexity, computation, and memory occupation of conventional image matching and in particular enables real-time image matching on mobile devices.
Brief description of the drawings
Fig. 1 is a flow diagram of the image matching method of embodiment one of the present application;
Fig. 2 is a flow diagram of step 102 in embodiment one;
Fig. 3 is a schematic diagram of an image sub-block in embodiment one;
Fig. 4 is a histogram of pixel values in embodiment one;
Fig. 5 is a schematic diagram of pixel-value-difference statistics in embodiment one;
Fig. 6 is the Gaussian histogram of the pixel differences in Fig. 5;
Fig. 7 is the result of layering the pixel-value histogram of Fig. 4;
Fig. 8 is a flow diagram of step 104 in embodiment one;
Fig. 9 is a flow diagram of step 124 in embodiment one;
Fig. 10 is a flow diagram of intercepting the training subsets of registered images in embodiment two;
Fig. 11 is a schematic diagram of key point extraction on a registered image in embodiment two;
Fig. 12 is a schematic diagram of applying rotation transformations to the registered picture of Fig. 11;
Fig. 13 is a schematic diagram of applying rotation and scale transformations to the registered picture of Fig. 11;
Fig. 14 is a schematic diagram of extracting key points on an image set in embodiment two;
Fig. 15 is a flow diagram of intercepting the training subsets of registered images in embodiment three;
Fig. 16 is a schematic structural diagram of the image matching system of embodiment four.
Detailed description of the embodiments
The embodiments of the application are described in detail below in conjunction with the drawings and examples, so that the process by which the application applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented.
In the following embodiments of the application, statistical layering of the image features of the training subsets of the registered images yields corresponding codewords; judging the correspondence between the codewords of the sub-blocks of the image to be matched and the codewords of the training subsets of the registered images determines the matching degree between the two. From this matching degree it can be comprehensively judged whether the image to be matched and the registered image match, which reduces the complexity, computation, and memory occupation of conventional image matching and in particular enables real-time image matching on mobile devices.
As shown in Fig. 1, the flow diagram of the image matching method of embodiment one, the image matching method in this embodiment comprises the following steps:
Step 101: the processor reads the preset registered images from the database into memory;
Step 102: the processor intercepts training subsets of the registered images in memory, performs statistical classification on the image features of each training subset, layers the statistical classification results to obtain the codeword of the training subset, and caches the codeword of the training subset;
In this embodiment, as shown in Fig. 2, the flow diagram of step 102, step 102 may further comprise:
Step 112: the processor intercepts the training subsets of the registered images in memory and performs statistics on the image features of each training subset to obtain a first statistical result;
In this embodiment, the first statistical result may be, but is not limited to, a level map of the training subset, such as a histogram, as long as it reflects the image features; the image features include but are not limited to pixel-value-related quantities such as pixel values and pixel-value differences, as described in the corresponding embodiments below.
Specifically, in step 112, performing statistics on the image features of the training subset to obtain the first statistical result may include: counting the pixel values at the same position point of each image sub-block in the training subset to generate the level map of the training subset (as shown in Fig. 3, a schematic diagram of an image sub-block, where A denotes a sub-block, and in Fig. 4, the histogram of pixel values); or counting the pixel-value differences between any two position points of each image sub-block in the training subset to obtain the level map of the training subset; or obtaining the level map of the training subset according to the distance of each position point from the center point of the corresponding image sub-block, together with the pixel values at the same position point of each image sub-block. As shown in Fig. 5, the schematic diagram of pixel-value-difference statistics, the difference between the pixel values of any two points A1 and A2 in image sub-block A is taken.
Specifically, obtaining the level map of the training subset according to the distance from the center point of the corresponding image sub-block and the pixel values at the same position point of each image sub-block may further include: first, the processor assigns to each position point of each image sub-block a weight, such as a Gaussian weight, according to its distance from the corresponding center point, obtaining a template such as a Gaussian template; then the template is convolved with the pixel values at the same position point of each image sub-block in the training subset to obtain the level map of the training subset. As shown in Fig. 6, the Gaussian histogram of the pixel differences of Fig. 5, the peak represents the pixel value of the center point of the image sub-block, and the weighted value decreases with distance from the center point. Because the Gaussian histogram takes the center point as the reference point and collects pixel-value statistics over the points around it, the error introduced by deviations in the detected feature-point position is reduced.
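As a concrete illustration of the Gaussian template described above, the following minimal Python sketch builds a distance-based weight template for one sub-block and accumulates a pixel-value histogram in which each pixel contributes its weight rather than a unit count. The sub-block size, sigma, and bin layout are illustrative assumptions, not values fixed by the application.

```python
import math

def gaussian_template(size, sigma=1.0):
    """Weight template: each position point weighted by its distance from the
    sub-block center point (a Gaussian template, as described above)."""
    c = (size - 1) / 2.0
    return [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def weighted_histogram(block, template, bins=8, max_val=256):
    """Accumulate a pixel-value histogram in which each pixel contributes its
    Gaussian weight instead of a unit count."""
    hist = [0.0] * bins
    width = max_val // bins
    for y, row in enumerate(block):
        for x, v in enumerate(row):
            hist[min(v // width, bins - 1)] += template[y][x]
    return hist

# A 5x5 sub-block: uniform value 40 with a brighter center pixel of 200.
block = [[40] * 5 for _ in range(5)]
block[2][2] = 200
tpl = gaussian_template(5)
hist = weighted_histogram(block, tpl)
```

The center point receives the largest weight, so pixel values measured near the center dominate the histogram, which is what makes the statistic tolerant of small feature-point localization errors.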
In this embodiment, in the above steps, the processor's interception of the training subsets of the registered images may proceed in either of two ways. In the first, key points are extracted on the registered image, and on the image set formed by applying different deformation processes to the registered image, the image sub-blocks near those key points are intercepted to form the training set; sub-blocks in the training set that come from the same deformation process are grouped into the same training subset. In the second, different deformation processes are applied to the registered image to obtain different image subsets; several first key points are extracted in each image subset, and the image sub-blocks near the first key points of the different image subsets form the training set. The first key points are then inverse-transformed back into the registered image to find the second key points corresponding to them; the second key points are classified by their mutual distances, e.g. second key points that lie close together are grouped into one class, the first key points corresponding to second key points of the same class are likewise grouped together, and the image sub-blocks near first key points of the same class in the image set are grouped into the same training subset. See the corresponding embodiments described below.
In this embodiment, if the first statistical result is the level map of the training subset, then layering the statistical classification result and assigning a corresponding binary code to each layer to obtain the codeword of the training subset further comprises: first, the processor layers the level map of the training subset according to preset pixel thresholds; second, the processor assigns the same binary code to the position points, in each image sub-block of the training subset, whose pixel values fall in the same layer; finally, the processor concatenates the binary codes of the position points of each image sub-block in the training subset to form the codeword of the training subset.
Step 122: the processor layers the first statistical result and assigns a corresponding first binary code to each layer of the first statistical result to obtain the codeword of the training subset.
Fig. 7 shows the result of layering the pixel-value histogram of Fig. 4. The horizontal axis represents the counted pixel values, which are layered along the vertical axis, for example into three layers: from top to bottom the first, second, and third layer. The first layer holds three pixel values, so the position points corresponding to these three pixel values are assigned 11, which can represent the highest likelihood; likewise the second layer is assigned 10, representing intermediate likelihood, and the third layer 00, representing the lowest likelihood. For the horizontal axis, suppose the possible pixel-value range is 0-255 and is divided into 8 intervals: 0~31, 32~63, 64~95, 96~127, 128~159, 160~191, 192~223, 224~255.
Suppose 100 pixel values are counted, i.e. there are 100 statistical samples: 24 fall in 0~31, 23 in 32~63, 24 in 64~95, 11 in 96~127, 8 in 128~159, 5 in 160~191, 3 in 192~223, and 2 in 224~255. Clearly the distribution over 0~31, 32~63, and 64~95 is largest, i.e. a pixel value is most likely to fall in these three intervals. Dividing into three layers along the vertical axis, the intervals 0~31, 32~63, and 64~95 are placed in the first layer and assigned 11; 96~127, with fewer samples, is assigned 10; and 128~159, 160~191, 192~223, and 224~255 are assigned 00. Concatenating these binary codes gives the codeword of the training subset: 11 11 11 10 00 00 00 00.
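The layering in this worked example can be sketched as follows. The layer thresholds (20 and 9 samples) are assumptions chosen to reproduce the layering described in the text, since the application leaves the preset pixel thresholds open.

```python
def layer_codeword(hist, layer_bounds=(20, 9), codes=("11", "10", "00")):
    """Assign each histogram interval a 2-bit code according to the layer its
    sample count falls in: counts >= 20 -> layer 1 ("11"), counts >= 9 ->
    layer 2 ("10"), everything else -> layer 3 ("00")."""
    def code_for(count):
        for bound, code in zip(layer_bounds, codes):
            if count >= bound:
                return code
        return codes[-1]
    return " ".join(code_for(c) for c in hist)

# The 8-interval worked example: 100 samples over 0-255.
codeword = layer_codeword([24, 23, 24, 11, 8, 5, 3, 2])  # "11 11 11 10 00 00 00 00"
```

With the same thresholds, the five-interval distribution 1, 40, 40, 10, 9 of the later example yields "00 11 11 10 10".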
In another embodiment, suppose the possible pixel-value range 0-255 on the horizontal axis is instead divided into 5 intervals: 0~50, 51~100, 101~150, 151~200, 201~255.
Suppose 100 pixel values, i.e. 100 statistical samples, are counted: 30 fall in 0~50, 32 in 51~100, 28 in 101~150, 8 in 151~200, and 2 in 201~255. The distribution over 0~50, 51~100, and 101~150 is largest, i.e. a pixel value is most likely to fall in these three intervals. Dividing into three layers along the vertical axis, the layer corresponding to pixel values in 0~50, 51~100, and 101~150 is assigned 11; 151~200, with fewer samples, is assigned 10; and 201~255 is assigned 00. Concatenating these binary codes gives the codeword of the training subset: 11 11 11 10 00.
If instead the 100 counted pixel values were distributed as 1 in 0~50, 40 in 51~100, 40 in 101~150, 10 in 151~200, and 9 in 201~255, then the distribution over 51~100 and 101~150 is largest, i.e. a pixel value is most likely to fall in these two intervals. Dividing into three layers along the vertical axis, the layer corresponding to pixel values in 51~100 and 101~150 is assigned 11; 151~200 and 201~255, with fewer samples, are assigned 10; and 0~50, with the fewest, is assigned 00. Concatenating these binary codes gives the codeword of the training subset: 00 11 11 10 10.
The binary codes belonging to the same image sub-block are concatenated to form the codeword of the training subset.
Step 103: the processor reads the input image to be matched into memory;
Step 104: the processor intercepts the image sub-blocks of the image to be matched in memory; according to the image features in the sub-blocks and the cached codewords of the training subsets, it obtains the matching degree between the image to be matched and the registered image, caches the matching degree, and judges from the size of the matching degree whether the image to be matched matches the registered image successfully.
The number of matching operations is related to the number of training subsets: the more training subsets there are, the more matches are needed; for example, with 2 training subsets, two matches are required.
As shown in Fig. 8, the flow diagram of step 104, step 104 may further include:
Step 114: the processor intercepts the image sub-blocks of the input image to be matched, caches them in memory, and performs statistics on the image features in the sub-blocks to obtain a second statistical result;
In this embodiment, the second statistical result may be a level map, such as a histogram, of the image sub-blocks of the image to be matched, and the image features may include but are not limited to pixel values.
In this embodiment, before the processor intercepts the image sub-blocks of the input image to be matched, the method may further include: inputting the image to be matched into the cache, extracting its key points, and caching them to memory, the key points of the image to be matched comprising points whose image information exceeds a second predetermined threshold; the processor then intercepts the image sub-blocks near the key points of the image to be matched in memory.
Step 124: according to the number of layers of the first statistical result in memory and the first binary code of each layer, the processor layers the second statistical result, obtains the codewords of the image sub-blocks of the image to be matched, and caches them to memory.
As shown in Fig. 9, the flow diagram of step 124, step 124 may further include:
Step 1241: according to whether a pixel value in the second statistical result falls into a given layer of the first statistical result, the processor assigns a second binary code to the corresponding position point in the image sub-block of the image to be matched, the number of digits of the second binary code being equal to the number of layers of the first statistical result;
Step 1242: the processor concatenates the second binary codes of the position points of each image sub-block of the image to be matched stored in memory, obtains the codeword of each image sub-block of the image to be matched, and caches it to memory.
As shown in Fig. 7, the first statistical result is divided into three layers; if a pixel value in the second statistical result falls into the first layer, the position point corresponding to that pixel value is assigned 111, and 000 for the other layers. By analogy, after every position point in the image sub-block has been processed, the codes are concatenated to obtain the codeword of the image sub-block of the image to be matched.
Step 134: according to the codewords of the training subsets in memory and the codewords of the image sub-blocks of the image to be matched, the processor obtains the matching degree between the image to be matched and the registered image, and caches the matching degree to memory.
In this embodiment, in step 104, the processor's judgment of whether the image to be matched matches the registered image according to the size of the matching degree can be realized with a Hamming-distance-style calculation; that is, step 104 may further include: performing a bitwise AND between the codewords of the image sub-blocks near the key points of the image to be matched and the corresponding codewords of the training subsets, and obtaining the matching degree between the image to be matched and the registered image from the result.
For example, if a pixel value of 58 is counted in the second statistical result, it falls into the second of the five intervals dividing 0-255, i.e. 51-100, so the corresponding codeword is 00 11 00 00 00. ANDing this with the codeword 00 11 11 10 10 of the corresponding training subset above gives a result of 2, showing a higher matching degree for the position points corresponding to these two pixel values.
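The AND-based comparison in this example can be reproduced directly; the helper name below is illustrative, and the matching degree is taken as the number of set bits surviving the AND, consistent with the result of 2 stated above.

```python
def and_match(codeword_a, codeword_b):
    """Matching degree: the number of set bits in the bitwise AND of two codewords."""
    a = int(codeword_a.replace(" ", ""), 2)
    b = int(codeword_b.replace(" ", ""), 2)
    return bin(a & b).count("1")

# Sub-block codeword for the counted pixel value 58 (second of the five intervals):
query = "00 11 00 00 00"
# Training-subset codeword from the corresponding worked example above:
train = "00 11 11 10 10"
degree = and_match(query, train)  # -> 2
```

Because the codewords are compared as bit strings, each comparison reduces to one integer AND and a population count, which is what keeps the per-sub-block matching cost low.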
Further, step 104 may also include: the processor determines a homography matrix according to the RANSAC criterion and the matching degrees in memory, so as to judge the transformation relation between the image to be matched and the registered image. That is, by comparing against each image sub-block of the training subsets of the registered image, it further determines what deformation, such as a scale change or a distortion, the image to be matched has undergone relative to the registered image. The specific formula is as follows:
the homography matrix describes the transformation between the two images. Suppose p(x, y) is a point in the registered image, p'(m, n) is a point in the image to be matched (the input image), and the homography H is a 3x3 matrix; then, up to a scale factor, [m, n, 1]^T = H·[x, y, 1]^T.
The above calculation belongs to the prior art and will not be repeated here.
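A minimal sketch of applying such a 3x3 homography to a registered-image point, under the convention [m, n, 1]^T = H·[x, y, 1]^T (up to scale) stated above. The example matrix is an assumed pure scale-and-translation transform, not one produced by the application's RANSAC step.

```python
def apply_homography(H, x, y):
    """Map a registered-image point p(x, y) through the 3x3 homography H to the
    image to be matched, dividing out the homogeneous scale factor."""
    m = H[0][0] * x + H[0][1] * y + H[0][2]
    n = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return m / w, n / w

# An assumed homography: a 1.2x scale change plus a translation of (5, -3).
H = [[1.2, 0.0, 5.0],
     [0.0, 1.2, -3.0],
     [0.0, 0.0, 1.0]]
m, n = apply_homography(H, 10, 10)  # approximately (17, 9)
```

In practice the nine entries of H would be estimated by RANSAC from the matched sub-block correspondences, and the recovered matrix then reveals the scale, rotation, or distortion relating the two images.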
As shown in Fig. 10, the flow diagram of intercepting the training subsets of registered images in embodiment two, intercepting the training subsets of a registered image in this embodiment may include:
Step 1111a: the processor intercepts the registered image in memory, extracts its key points, and caches them to memory, the key points of the registered image comprising points whose image information exceeds a first predetermined threshold;
The key points of the registered image may include points with relatively rich image information; "rich image information" can mean that the image information exceeds the first predetermined threshold, for example points whose texture features or gradient features exceed it. Further, in this embodiment, the key points of the registered image can be, but are not limited to being, extracted by the Harris (HARRIS), FAST, or Hessian-Affine (HESSIAN AFFINE) detection methods. As shown in Fig. 11, the schematic diagram of key point extraction on a registered image, the image information is relatively rich at the positions containing text, so key points are extracted at those positions.
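As a rough illustration of why such positions score highly, the following pure-Python sketch computes the classic Harris corner response, R = det(M) - k·trace(M)^2, over a 3x3 window. It is a simplified stand-in for the Harris/FAST/Hessian-Affine detectors named above; the test image, window size, and k value are illustrative assumptions.

```python
def harris_response(img, x, y, k=0.04):
    """Harris corner response at pixel (x, y): image gradients by central
    differences, summed over a 3x3 window into the structure tensor M,
    then R = det(M) - k * trace(M)^2 (large positive R indicates a corner)."""
    sxx = sxy = syy = 0.0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

# 10x10 test image: a bright quadrant whose corner sits at (5, 5).
img = [[255 if r >= 5 and c >= 5 else 0 for c in range(10)] for r in range(10)]
corner = harris_response(img, 5, 5)  # strong positive response at the corner
flat = harris_response(img, 2, 2)    # zero response on the flat background
```

Thresholding such a response over the whole image is one way of realizing "image information exceeds the first predetermined threshold" for gradient-based key points.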
Step 1112a: the processor applies different deformation processes to the registered image to generate the corresponding different image subsets, which together form an image set, and caches them to memory;
In this embodiment, the deformation processes may include but are not limited to at least one of scale change, distortion change, rotation change, and translation change. The several virtual images obtained after each deformation process can form one image subset, and the image subsets corresponding to several deformation processes form one image set. For example, a scale change can be a 1.2x enlargement, a 0.9x reduction, and so on; each picture subset then has similar change parameters, e.g. the pictures produced by scale changes of 1~1.5x, rotation deformations of 10~20 degrees, and torsional deformations of 0~10 degrees constitute one subset of pictures. As shown in Fig. 12, the schematic diagram of applying rotation transformations to the registered picture of Fig. 11, and in Fig. 13, the schematic diagram of applying rotation and scale transformations to it, the three pictures of the first row are obtained by rotation deformation and form one image subset, and the three pictures of the second row are obtained by torsional deformation and form another image subset.
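A minimal sketch of generating one such image subset, using a nearest-neighbor rotation as the deformation process; the rotation function, image contents, and 0-20 degree angle range are illustrative assumptions standing in for whichever deformations the application actually applies.

```python
import math

def rotate(img, angle_deg):
    """Nearest-neighbor rotation of a square image about its center, by inverse
    mapping each destination pixel back into the source image."""
    n = len(img)
    c = (n - 1) / 2.0
    a = math.radians(angle_deg)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # rotate destination coordinates back into the source image
            sx = math.cos(a) * (x - c) + math.sin(a) * (y - c) + c
            sy = -math.sin(a) * (x - c) + math.cos(a) * (y - c) + c
            j, i = round(sx), round(sy)
            if 0 <= i < n and 0 <= j < n:
                out[y][x] = img[i][j]
    return out

# One image subset: the virtual images produced by rotations in the 0-20 degree range.
img = [[10 * r + col for col in range(5)] for r in range(5)]
subset = [rotate(img, a) for a in (0, 10, 20)]
```

Repeating this over several parameter ranges (scale, torsion, translation) yields the further image subsets that together make up the image set.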
Step 1113a: the processor intercepts the image sub-blocks near the key points in the image set in memory to obtain the training set, and caches it to memory;
As shown in Fig. 14, the schematic diagram of extracting key points on an image set, a key point lies at the corner of "L", and the image sub-block A is formed from the 5x5 range around the key point.
Step 1114a: the processor groups the image sub-blocks from the same image subset of the image set into the same training subset of the training set, and caches it to memory.
As shown in Fig. 15, the flow diagram of intercepting the training subsets of registered images in embodiment three. Unlike embodiment two above, in this embodiment the key points in the deformed post-registration images are inverse-transformed back to key points of the registered image before deformation; intercepting the training subsets of the registered image may specifically include:
Step 1111b: the processor applies different deformation processes to the registered image in memory to generate the corresponding different image subsets forming the image set, extracts the first key points of the different image subsets, intercepts the image sub-blocks near the first key points in the different image subsets to form the training set, and caches it to memory;
As in embodiment two above, the deformation processes may include but are not limited to at least one of scale change, distortion change, rotation change, and translation change. The several images obtained after each deformation process can form one image subset, and the image sub-blocks near the first key points in the image subsets corresponding to the several deformation processes form the training set.
Step 1112b: the processor inverse-transforms the first key points back into the registered image to determine the second key points corresponding to them in the registered image, groups second key points whose mutual distance is less than a preset distance threshold into the same class, and caches the result to memory;
The registered image has undergone different deformation processes, so in the deformed registered images, i.e. the image set, the points with rich image information, i.e. the first key points, are the transformed versions of points that existed before deformation. The first key points obtained after deformation are uniformly inverse-transformed to obtain the second key points in the same registered image, which are then classified according to the distances between the different second key points.
Step 1113b: the processor determines, from the second key points of the same class, the first key points belonging to the same class, groups the image sub-blocks near all first key points of the same class in the image set into the same training subset of the training set, and caches it to memory.
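Steps 1112b-1113b can be sketched as follows: each first key point is mapped back through the inverse of the deformation that produced it (assumed here to be a scale-then-rotate transform), and the resulting second key points are grouped by a distance threshold. Function names, the greedy grouping rule, and the sample points are illustrative assumptions.

```python
import math

def invert_affine(s, angle_deg, pt):
    """Map a first key point back into the registered image by undoing a
    scale-by-s-then-rotate-by-angle deformation, giving the second key point."""
    a = math.radians(-angle_deg)
    x, y = pt
    return ((math.cos(a) * x - math.sin(a) * y) / s,
            (math.sin(a) * x + math.cos(a) * y) / s)

def cluster(points, threshold):
    """Greedy grouping: a point joins the first class whose representative lies
    closer than the distance threshold, otherwise it starts a new class."""
    classes = []
    for p in points:
        for cls in classes:
            if math.dist(p, cls[0]) < threshold:
                cls.append(p)
                break
        else:
            classes.append([p])
    return classes

# Three first key points from differently deformed image subsets, all images of
# the same registered-image point (10, 0), plus one from a different point.
firsts = [((1.0, 0), (10.0, 0.0)), ((2.0, 0), (20.0, 0.0)),
          ((1.0, 90), (0.0, 10.0)), ((1.0, 0), (50.0, 0.0))]
seconds = [invert_affine(s, a, p) for (s, a), p in firsts]
classes = cluster(seconds, threshold=1.0)
```

The first three points collapse onto the same second key point and form one class, so their surrounding sub-blocks would be grouped into one training subset; the fourth starts its own class.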
As shown in Fig. 16, the schematic structural diagram of the image matching system of embodiment four, the image matching system includes: a processor 1601, a cache 1602, a memory 1603, and a database 1604. The cache 1602 is used to buffer the data generated during image matching and store it to the memory 1603. The processor 1601 is used to read the registered images preset in the database 1604 into the memory 1603, intercept the training subsets of the registered images, perform statistical classification on the image features of the training subsets, and layer the statistical classification results to obtain the codewords of the training subsets; the processor 1601 is also used to intercept the image sub-blocks of the input image to be matched, obtain the matching degree between the image to be matched and the registered image according to the image features in the sub-blocks and the codewords of the training subsets, and judge from the size of the matching degree whether the image to be matched matches the registered image successfully.
In this embodiment, the processor 1601 can include a first processing unit 16011 and a second processing unit 16012. The first processing unit 16011 is used to intercept the training subsets of the registered images according to the registered images preset in the database, perform statistical classification on the image features of the training subsets, and perform layering according to the statistical classification results to obtain the codewords of the training subsets. The second processing unit 16012 is used to intercept the image sub-blocks of the input image to be matched, obtain the matching degree between the image to be matched and the registered images according to the image features of the image sub-blocks and the codewords of the training subsets, and judge, according to the magnitude of the matching degree, whether the image to be matched successfully matches the registered images.
Corresponding to the above method, other functional modules can also be added to the image matching system as actually needed; this is not described in detail here.
Those skilled in the art will appreciate that the above system can also operate in a real-time processing manner, in which case the memory and the cache can be dispensed with.
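As a rough illustration of the Figure 16 wiring, the pipeline could be arranged as below; all class and method names are hypothetical, and the patent does not prescribe this API:

```python
from dataclasses import dataclass, field

@dataclass
class ImageMatchingSystem:
    """Sketch of the embodiment-four structure: a database of registered
    images, a memory acting as the codeword cache, and two processing
    stages standing in for units 16011 and 16012."""
    database: dict = field(default_factory=dict)   # name -> registered image
    memory: dict = field(default_factory=dict)     # name -> cached codeword

    def register(self, name, image, build_codeword):
        # First processing unit: training subset -> codeword, cached in memory.
        self.database[name] = image
        self.memory[name] = build_codeword(image)

    def match(self, query, build_codeword, degree, threshold=0.5):
        # Second processing unit: compare the query codeword against each
        # cached registered-image codeword and judge by matching degree.
        qcode = build_codeword(query)
        scores = {n: degree(qcode, c) for n, c in self.memory.items()}
        best = max(scores, key=scores.get, default=None)
        if best is not None and scores[best] >= threshold:
            return best, scores[best]
        return None, 0.0
```

Here `build_codeword` and `degree` are placeholders for the codeword-construction and AND-based comparison steps described elsewhere in the document.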
The foregoing has shown and described some preferred embodiments of the present application. However, as noted above, it should be understood that the present application is not limited to the forms disclosed herein, which should not be taken as excluding other embodiments; the application can be used in various other combinations, modifications, and environments, and can be modified within the scope of the inventive concept described herein through the above teachings or the technology or knowledge of the related art. All changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present application shall fall within the protection scope of the appended claims.
Claims (19)
1. An image matching method, characterized by including:
a processor intercepting a training subset of a registered image according to registered images preset in a database, performing statistical classification on image features of the training subset, and performing layering according to the statistical classification result to obtain a codeword of the training subset;
the processor obtaining an input image to be matched and intercepting image sub-blocks of the input image to be matched, obtaining a matching degree between the image to be matched and the registered image according to codewords of the image sub-blocks and the codeword of the training subset, and judging, according to the magnitude of the matching degree, whether the image to be matched successfully matches the registered image.
2. The method of claim 1, characterized in that the processor intercepting the training subset of the registered image, performing statistical classification on the image features of the training subset, and performing layering according to the statistical classification result to obtain the codeword of the training subset further includes:
the processor intercepting the training subset of the registered image and performing statistics on the image features of the training subset to obtain a first statistical result;
the processor layering the first statistical result and assigning a corresponding first binary code to each layer of the first statistical result, to obtain the codeword of the training subset.
3. The method of claim 2, characterized in that the processor obtaining the input image to be matched and intercepting the image sub-blocks of the input image to be matched, and obtaining the matching degree between the image to be matched and the registered image according to the codewords of the image sub-blocks and the codeword of the training subset, further includes:
the processor obtaining the input image to be matched, intercepting the image sub-blocks of the input image to be matched, and performing statistics on the image features of the image sub-blocks to obtain a second statistical result;
the processor layering the second statistical result according to the number of layers of the first statistical result and the first binary code of each layer of the first statistical result, to obtain the codewords of the image sub-blocks of the image to be matched;
the processor obtaining the matching degree between the image to be matched and the registered image according to the codeword of the training subset and the codewords of the image sub-blocks of the image to be matched.
4. The method of claim 1, characterized in that the processor intercepting the training subset of the registered image further includes:
the processor inputting the registered image and extracting key points of the registered image, the key points of the registered image including points whose image information exceeds a first preset threshold;
the processor applying different deformation processes to the registered image to generate corresponding different image subsets that form an image set;
the processor intercepting the image sub-blocks near the key points in the image set to obtain a training set;
the processor grouping the image sub-blocks of the image set that belong to the same image subset into the same training subset of the training set.
5. The method of claim 4, characterized in that the processor inputting the registered image and extracting the key points of the registered image further includes:
extracting the key points of the registered image according to a Harris (HARRIS), fast (FAST), or Hessian-affine (HESSIAN AFFINE) detection method.
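Claim 5 names Harris among the usable detectors; a minimal Harris corner response in NumPy might look as follows. The window size, the constant `k`, and the threshold are illustrative choices, not values from the patent:

```python
import numpy as np

def harris_keypoints(image, k=0.04, threshold=1e6):
    """Minimal Harris corner detector (one of the methods named in
    claim 5); returns (row, col) points whose response exceeds threshold."""
    img = image.astype(float)
    dy, dx = np.gradient(img)

    # Sum a structure-tensor entry over a 3x3 window via padding + shifts.
    def box(a):
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    sxx, syy, sxy = box(dx * dx), box(dy * dy), box(dx * dy)
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    return list(zip(*np.nonzero(r > threshold)))
```

A white square on a black background yields strong responses at its four corners while flat regions and straight edges are rejected, which matches the claim's notion of key points as points carrying much image information.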
6. The method of claim 4, characterized in that the deformation processing includes at least one of: a scale change, a distortion change, a rotation change, and a translation change.
7. The method of claim 1, characterized in that the processor intercepting the training subset of the registered image further includes:
the processor applying different deformation processes to the registered image to generate corresponding different image subsets that form an image set, extracting first key points of the different image subsets, and intercepting the image sub-blocks near the first key points in the different image subsets to form a training set;
the processor applying the inverse transforms to the first key points to determine the corresponding second key points in the registered image, and grouping second key points whose mutual distance is less than a preset distance threshold into the same class;
the processor determining the first key points of the same class according to the second key points of the same class, and grouping the image sub-blocks near all first key points of the same class in the image set into the same training subset of the training set.
8. The method of claim 2, characterized in that the processor performing statistics on the image features of the training subset to obtain the first statistical result further includes:
the processor counting the pixel values of the same-position points of each image sub-block in the training subset to generate a color range figure of the training subset.
9. The method of claim 2, characterized in that the processor performing statistics on the image features of the training subset to obtain the first statistical result further includes:
the processor obtaining the color range figure of the training subset according to the distance of each point from the center point of the corresponding image sub-block and the pixel values of the same-position points of each image sub-block.
10. The method of claim 9, characterized in that the processor obtaining the color range figure of the training subset according to the distance from the center point of the corresponding image sub-block and the pixel values of the same-position points of each image sub-block further includes:
the processor assigning a corresponding weight according to the distance of each same-position point of the image sub-blocks from the corresponding center point, to obtain a template;
the processor convolving the template with the pixel values of the same-position points of each image sub-block in the training subset, to obtain the color range figure of the training subset.
11. The method of claim 10, characterized in that the weight is a Gaussian weight and the template is a Gaussian template.
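One reading of claims 9-11 (a Gaussian template weighting the same-position pixel statistics) can be sketched as follows. Interpreting the claimed "convolution" as a pointwise weighting of the per-position mean over sub-blocks is our assumption; the function names and `sigma` are illustrative:

```python
import numpy as np

def gaussian_template(size, sigma):
    """Weight each sub-block position by a Gaussian of its distance to
    the sub-block center (the Gaussian template of claims 10-11)."""
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - c) ** 2 + (ys - c) ** 2) / (2.0 * sigma ** 2))

def color_range_figure(subblocks, sigma=2.0):
    """Weighted per-position statistic over a training subset: the
    template multiplies the mean pixel value taken at the same position
    of every sub-block, yielding the color range figure."""
    stack = np.stack([b.astype(float) for b in subblocks])
    return gaussian_template(stack.shape[1], sigma) * stack.mean(axis=0)
```

Points near the sub-block center thus contribute with full weight, while border points are attenuated, which is the stated purpose of the distance-based weighting.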
12. The method of claim 2, characterized in that the processor performing statistics on the image features of the training subset to obtain the first statistical result further includes:
the processor counting the pixel value differences between any two position points of each image sub-block in the training subset, to obtain the color range figure of the training set.
13. The method of any one of claims 8-12, characterized in that the processor performing layering according to the statistical classification result and assigning a corresponding binary code to each layer of the statistical classification result to obtain the codeword of the training subset further includes:
the processor layering the color range figure of the training subset according to preset pixel thresholds;
the processor assigning the same binary code to the position points of each image sub-block in the training subset whose pixel values fall in the same layer;
the processor serializing the binary codes corresponding to the position points of each image sub-block in the training subset to form the codeword of the training subset.
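The layering-and-serialization of claim 13 can be sketched as follows; the one-hot binary code per layer and the use of `np.digitize` for the threshold layering are illustrative assumptions:

```python
import numpy as np

def codeword(color_range, thresholds):
    """Layer the color range figure with preset pixel thresholds and
    give every position in the same layer the same binary code;
    serializing the per-position codes yields the training-subset
    codeword (claim 13)."""
    n_layers = len(thresholds) + 1
    layers = np.digitize(color_range.ravel(), thresholds)
    # One-hot binary code per layer, e.g. layer 1 of 3 -> '010'.
    codes = [format(1 << (n_layers - 1 - l), '0%db' % n_layers)
             for l in layers]
    return ''.join(codes)
```

With two thresholds (three layers), a 2x2 color range figure produces a 12-bit codeword: one 3-bit one-hot code per position, concatenated in scan order.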
14. The method of claim 13, characterized in that the color range figure of the training subset is a histogram.
15. The method of claim 3, characterized in that, before the processor obtains the input image to be matched and intercepts the image sub-blocks of the input image to be matched, the method further includes:
the processor inputting the image to be matched and extracting key points of the image to be matched, the key points of the image to be matched including points whose image information exceeds a second preset threshold;
the processor intercepting the image sub-blocks near the key points of the image to be matched.
16. The method of claim 15, characterized in that the processor layering the second statistical result according to the number of layers of the first statistical result and the first binary code of each layer of the first statistical result, to obtain the codewords of the image sub-blocks of the image to be matched, further includes:
the processor assigning a second binary code to each corresponding position point of the image sub-blocks of the image to be matched according to whether its pixel value in the second statistical result falls into a layer of the first statistical result, the number of bits of the second binary code being equal to the number of layers of the first statistical result;
the processor serializing the second binary codes corresponding to the position points of each image sub-block of the image to be matched, to obtain the codewords of the image sub-blocks of the image to be matched.
17. The method of claim 1, characterized in that the processor obtaining the matching degree between the image to be matched and the registered image according to the codeword of the training subset and the codewords of the image sub-blocks of the image to be matched further includes:
the processor ANDing the codewords of the image sub-blocks near the key points of the image to be matched with the corresponding codewords of the training subsets, to obtain the matching degree between the image to be matched and the registered image.
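A sketch of the AND-based comparison of claim 17; normalizing the shared set bits by the query codeword's set bits is our own choice, not specified by the claim:

```python
def matching_degree(query_code, registered_code):
    """Claim 17 computes the matching degree by ANDing the codeword of
    the sub-block near each key point of the image to be matched with
    the corresponding training-subset codeword; here the degree is the
    fraction of the query's set bits that survive the AND."""
    assert len(query_code) == len(registered_code)
    shared = sum(a == b == '1' for a, b in zip(query_code, registered_code))
    return shared / max(query_code.count('1'), 1)
```

Identical codewords give a degree of 1.0, disjoint ones 0.0; the per-key-point degrees would then be aggregated over the whole image before the success/failure judgement.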
18. The method of claim 1, characterized in that the processor judging, according to the magnitude of the matching degree, whether the image to be matched successfully matches the registered image further includes:
determining a homography matrix according to the RANSAC criterion and the matching degree, to judge the transformation relation between the image to be matched and the registered image.
19. An image matching system, characterized by including: a processor and a database, the database being used to preset registered images, and the processor being used to intercept a training subset of a registered image according to the registered images preset in the database, perform statistical classification on image features of the training subset, and perform layering according to the statistical classification result to obtain a codeword of the training subset; and to obtain an input image to be matched and intercept image sub-blocks of the input image to be matched, obtain a matching degree between the image to be matched and the registered image according to codewords of the image sub-blocks and the codeword of the training subset, and judge, according to the magnitude of the matching degree, whether the image to be matched successfully matches the registered image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210448015.5A CN103810502B (en) | 2012-11-09 | 2012-11-09 | A kind of image matching method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210448015.5A CN103810502B (en) | 2012-11-09 | 2012-11-09 | A kind of image matching method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103810502A CN103810502A (en) | 2014-05-21 |
CN103810502B true CN103810502B (en) | 2017-09-19 |
Family
ID=50707243
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210448015.5A Active CN103810502B (en) | 2012-11-09 | 2012-11-09 | A kind of image matching method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103810502B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109427042B (en) * | 2017-08-25 | 2021-07-30 | 中国科学院声学研究所 | Method for extracting layered structure and spatial distribution of local sea area sedimentary layer |
CN110288639B (en) * | 2019-06-21 | 2020-05-15 | 深圳职业技术学院 | Auxiliary virtual splicing system for computer images |
CN111654666A (en) * | 2020-05-19 | 2020-09-11 | 河南中烟工业有限责任公司 | Tray cigarette material residue identification system and identification method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7099517B2 (en) * | 2000-02-05 | 2006-08-29 | University Of Strathclyde | Codebook search methods |
CN1889089A (en) * | 2006-07-27 | 2007-01-03 | 北京中星微电子有限公司 | Two-dimensional code positioning identifying method and apparatus based on two-stage classification |
CN102096931A (en) * | 2011-03-04 | 2011-06-15 | 中南大学 | Moving target real-time detection method based on layering background modeling |
CN102222235A (en) * | 2010-04-14 | 2011-10-19 | 同济大学 | Object-oriented hyperspectral classification processing method based on object integration height information |
- 2012-11-09 CN CN201210448015.5A patent/CN103810502B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7099517B2 (en) * | 2000-02-05 | 2006-08-29 | University Of Strathclyde | Codebook search methods |
CN1889089A (en) * | 2006-07-27 | 2007-01-03 | 北京中星微电子有限公司 | Two-dimensional code positioning identifying method and apparatus based on two-stage classification |
CN102222235A (en) * | 2010-04-14 | 2011-10-19 | 同济大学 | Object-oriented hyperspectral classification processing method based on object integration height information |
CN102096931A (en) * | 2011-03-04 | 2011-06-15 | 中南大学 | Moving target real-time detection method based on layering background modeling |
Non-Patent Citations (2)
Title |
---|
A multi-kernel learning image classification method based on sparse coding; Qi Xiaozhen et al.; Acta Electronica Sinica (China Journal Full-text Database); 2012-04-30; Vol. 40, No. 4; p. 777 * |
Image matching based on local gray-value coding; Feng Yuping et al.; Journal of Qingdao University of Science and Technology (Natural Science Edition) (China Journal Full-text Database); 2011-08-31; Vol. 32, No. 4; p. 437 * |
Also Published As
Publication number | Publication date |
---|---|
CN103810502A (en) | 2014-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zeng et al. | WISERNet: Wider separate-then-reunion network for steganalysis of color images | |
CN111639558B (en) | Finger vein authentication method based on ArcFace Loss and improved residual error network | |
CN106919944A (en) | A kind of wide-angle image method for quickly identifying based on ORB algorithms | |
CN107729820A (en) | A kind of finger vein identification method based on multiple dimensioned HOG | |
CN107516316A (en) | It is a kind of that the method that focus mechanism is split to static human image is introduced in FCN | |
CN108090511A (en) | Image classification method, device, electronic equipment and readable storage medium storing program for executing | |
CN103810502B (en) | A kind of image matching method and system | |
CN113808180B (en) | Heterologous image registration method, system and device | |
CN106408597A (en) | Neighborhood entropy and consistency detection-based SAR (synthetic aperture radar) image registration method | |
CN110610174A (en) | Bank card number identification method under complex conditions | |
CN106023187A (en) | Image registration method based on SIFT feature and angle relative distance | |
CN106709500A (en) | Image feature matching method | |
CN110458792A (en) | Method and device for evaluating quality of face image | |
CN110738222A (en) | Image matching method and device, computer equipment and storage medium | |
Nirmal Jothi et al. | Tampering detection using hybrid local and global features in wavelet-transformed space with digital images | |
CN104537381A (en) | Blurred image identification method based on blurred invariant feature | |
Al_azrak et al. | Copy-move forgery detection based on discrete and SURF transforms | |
Tao et al. | Highly efficient follicular segmentation in thyroid cytopathological whole slide image | |
CN112132812B (en) | Certificate verification method and device, electronic equipment and medium | |
Amorim et al. | Analysing rotation-invariance of a log-polar transformation in convolutional neural networks | |
Mahmoud et al. | Copy-move forgery detection using zernike and pseudo zernike moments. | |
Gao et al. | Multiscale dynamic curvelet scattering network | |
Zhu et al. | ID card number detection algorithm based on convolutional neural network | |
CN110197184A (en) | A kind of rapid image SIFT extracting method based on Fourier transformation | |
Lu et al. | Research on image stitching method based on fuzzy inference |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1195156 Country of ref document: HK |
|
GR01 | Patent grant | ||
GR01 | Patent grant | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: GR Ref document number: 1195156 Country of ref document: HK |