CN107392215A - A kind of multigraph detection method based on SIFT algorithms - Google Patents
A multi-image (duplicate picture) detection method based on the SIFT algorithm
- Publication number
- CN107392215A CN107392215A CN201710653168.6A CN201710653168A CN107392215A CN 107392215 A CN107392215 A CN 107392215A CN 201710653168 A CN201710653168 A CN 201710653168A CN 107392215 A CN107392215 A CN 107392215A
- Authority
- CN
- China
- Prior art keywords
- image
- key point
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
An automatic method for detecting duplicate pictures on e-commerce websites based on the SIFT algorithm: 1) pre-process each picture to be examined on the website by cropping, keeping only the vertical section from 15% to 85% of the picture height; 2) build the scale space: by applying scale transformations to the image, obtain the multi-scale scale-space representation sequence; 3) keypoint localization: at each candidate position, determine location and scale by fitting a fine model; 4) orientation assignment: based on local image gradient directions, assign one or more orientations to each keypoint position; 5) keypoint description: in a neighborhood around each keypoint, measure the local image gradients at the selected scale; 6) keypoint matching: compare the descriptors of the two images pairwise to find mutually matching pairs of feature points; 7) similarity calculation: decide whether the pictures are duplicates using a custom picture-similarity formula.
Description
Technical field
The present invention relates to the field of image detection, and in particular to a method for detecting duplicate pictures on e-commerce websites based on the SIFT algorithm.
Background technology
In a competitive commodity market, merchants on some e-commerce websites (such as Made-in-China.com) repeatedly submit the same product in order to increase product traffic and sales, a practice known as duplicate listing. E-commerce websites generally restrict this behavior and typically define a duplicate listing as follows: identical goods with identical key attributes may be sold in only one sales mode and listed only once; violating this rule is judged to be duplicate publishing. Different goods must differ in title, description, pictures, and so on, otherwise they are also judged to be duplicate listings. Detection of duplicate goods generally checks that: first, the brand and model are not identical; second, the title and product description are not identical; third, the pictures are not identical.
Judging duplicate pictures is a technical difficulty. Taking Made-in-China.com as an example, merchants who post duplicate listings usually upload pictures that are not exactly identical, in order to evade the website's detection: the pictures may have been scaled, cropped, blurred, watermarked, and so on. To cope with this more complex picture environment, duplicate pictures used to be judged manually by business staff, but as website traffic grows and large numbers of goods are added, manual detection inevitably incurs substantial labor cost. SIFT (Scale-Invariant Feature Transform) is a classic image-matching algorithm in computer vision used to detect and describe local features in images. It finds extreme points in scale space and extracts position, scale, and rotation invariants. The algorithm was published by David Lowe in 1999 and improved and summarized in 2004. Current applications include object recognition, robot mapping and navigation, image stitching, 3D model construction, gesture recognition, image tracking, and action comparison.
The content of the invention
The object of the present invention is to give a similarity decision interval for duplicate pictures, ultimately achieving automatic recognition of duplicate images and thereby providing a basis for judging whether a merchant posts duplicate listings, so that the e-commerce website operates well and offers its visitors a fresh, duplicate-free environment.
The technical scheme of the invention is: an automatic method for detecting duplicate pictures on e-commerce websites based on the SIFT algorithm, whose content comprises:
1. Image cropping
Because some Made-in-China.com merchants add text and logos at the top and bottom of their product pictures, image preprocessing is required before applying SIFT.
2. Scale-space construction
By applying scale transformations to the image, obtain the scale-space representation sequence at multiple scales, and search for image positions over all scales. Candidate keypoints invariant to scale and rotation are generally identified with a Gaussian difference function.
3. Keypoint localization
At each candidate position, location and scale are determined by fitting a fine model. Keypoints are selected according to their stability.
4. Orientation assignment
Based on local image gradient directions, one or more orientations are assigned to each keypoint position. All subsequent operations on the image data are performed relative to the keypoint's orientation, scale, and position, providing invariance to these transformations.
5. Keypoint description
In a neighborhood around each keypoint, local image gradients are measured at the selected scale. These gradients are transformed into a representation that tolerates larger local shape deformation and illumination change.
6. Keypoint matching
The descriptors of the two images are compared pairwise to find mutually matching pairs of feature points, establishing the correspondence between the pictures.
7. Similarity calculation
Using a custom picture-similarity formula and a given similarity decision interval for duplicate pictures, decide whether the pictures are duplicates.
Beneficial effects: the present invention makes full use of SIFT's local image feature descriptors to help recognize and detect objects. SIFT features are based on points of interest in the local appearance of an object and are independent of image size and rotation; their tolerance to light, noise, and slight viewpoint change is also quite high. The features are highly distinctive and relatively easy to extract, so objects can be recognized with a low misidentification rate even in a huge feature database. The detection rate using SIFT descriptors is high even for partially occluded objects; as few as three SIFT features are enough to compute position and orientation. Under today's feature-database conditions, recognition speed approaches real time. SIFT features carry rich information and are suitable for fast, accurate matching in large databases. The present invention quantitatively computes picture similarity; taking Made-in-China.com as an example, by computing over a large number of similar pictures, a similarity decision interval for duplicate pictures is given, finally achieving automatic recognition of duplicate images and providing a basis for judging whether a merchant posts duplicate listings. This provides a practical tool for commercial websites.
Brief description of the drawings
Fig. 1 is the flow chart of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the Gaussian pyramid model of an image;
Fig. 3 is a schematic diagram of building the DoG pyramid with the DoG operator;
Fig. 4 to Fig. 7 are result charts of the present invention: there are four embodiments, each figure corresponding to one concrete duplicate-check result.
Embodiment
Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the implementation steps of the present invention are as follows:
S11: Image preprocessing
Because some Made-in-China.com merchants add text and logos at the top and bottom of their product pictures, while the main subject (the product) is concentrated in the center of the picture, cropping must be done before applying SIFT. By experiment, we keep only the vertical section from 15% to 85% of the picture height.
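The cropping step described above can be sketched as a simple array slice. The 0.15/0.85 bounds come from the text; the function name and the use of NumPy arrays for images are illustrative assumptions:

```python
import numpy as np

def crop_vertical(img, top=0.15, bottom=0.85):
    """Keep only the vertical band between `top` and `bottom` of the
    image height, discarding merchant text/logos at the edges."""
    h = img.shape[0]
    return img[int(h * top):int(h * bottom)]

# A 100-row dummy "image": rows 15..84 survive, i.e. 70 rows.
img = np.zeros((100, 60, 3), dtype=np.uint8)
cropped = crop_vertical(img)
print(cropped.shape)  # (70, 60, 3)
```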
S12: Scale-space representation
The SIFT algorithm searches for keypoints over different scale spaces, and the scale space is obtained by Gaussian blurring. The scale space L(x, y, σ) of an image is defined as the convolution of the original image I(x, y) with a variable-scale two-dimensional Gaussian function G(x, y, σ):
L(x, y, σ) = G(x, y, σ) * I(x, y)
where
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
(x, y) are spatial coordinates and σ is the scale coordinate. The value of σ determines the smoothness of the image: large scales correspond to the coarse outline of the image, small scales to its fine detail.
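The two-dimensional Gaussian G(x, y, σ) can be sampled directly on a grid. A minimal sketch; the kernel radius (truncation at 3σ) and the sum-to-one normalization are common conventions, not specified by the text:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sample the 2-D Gaussian G(x, y, sigma) on a (2r+1) x (2r+1) grid."""
    if radius is None:
        radius = int(3 * sigma)          # common truncation at 3 sigma
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()                   # normalize so blurring preserves brightness

k = gaussian_kernel(1.6)
print(k.shape)  # (9, 9) for radius 4
```

Convolving an image with this kernel at increasing σ produces the L(x, y, σ) stack described above.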
S13: Construction of the Gaussian pyramid
The scale space is implemented as a Gaussian pyramid, whose construction has two parts:
1. applying Gaussian blur at different scales to the image;
2. down-sampling the image.
The pyramid model of an image is a series of images of decreasing size obtained by repeatedly down-sampling the original image, forming a tower shape from bottom to top. The original image is the first layer of the pyramid, each new image obtained by down-sampling is one layer, and the pyramid has n layers in total. The number of layers is determined jointly by the original size of the image and the size of the tower-top image, by the formula: n = log2{min(M, N)} − t, t ∈ [0, log2{min(M, N)}], where M, N are the dimensions of the original image and t is the logarithm of the minimum dimension of the tower-top image. To make the scale change continuous, Gaussian filtering is added on top of simple down-sampling. One image produces several groups (octaves) of images, and one group contains several layers (intervals) of images.
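The layer-count formula n = log2{min(M, N)} − t can be evaluated directly; a pure-Python sketch (the integer truncation is an assumption, since the text does not say how non-power-of-two sizes are rounded):

```python
import math

def pyramid_layers(M, N, t=0):
    """Number of pyramid layers for an M x N image, where t is the
    log2 of the minimum dimension of the tower-top image."""
    return int(math.log2(min(M, N))) - t

print(pyramid_layers(512, 768))       # log2(512) = 9 -> 9 layers
print(pyramid_layers(512, 768, t=2))  # stop at a 4-pixel top -> 7 layers
```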
S14: DoG (difference-of-Gaussian) pyramid
The detection of all keypoints is based on scale-invariant features, and the scale-normalized LoG (Laplacian of Gaussian) operator has true scale invariance. The LoG operator is built from the second derivatives of the Gaussian function:
∇²G = ∂²G/∂x² + ∂²G/∂y²
and the relation between the LoG operator and the Gaussian kernel is:
G(x, y, kσ) − G(x, y, σ) ≈ (k − 1)σ²∇²G
The derivation shows that the difference of Gaussian kernels is directly related to the LoG operator, which motivates a new operator, DoG (Difference of Gaussians), i.e. the difference of Gaussians:
D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y)
           = L(x, y, kσ) − L(x, y, σ)
The DoG pyramid is therefore built with the DoG operator.
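Since D(x, y, σ) = L(x, y, kσ) − L(x, y, σ), each DoG layer is simply the difference of two adjacent Gaussian-blurred images in an octave. A sketch over one octave (the list-of-arrays representation is an assumption):

```python
import numpy as np

def dog_layers(gaussian_octave):
    """Given one octave as a list of progressively blurred images
    L(., ., sigma), L(., ., k*sigma), ..., return the DoG layers
    D = L(k*sigma) - L(sigma) for each adjacent pair."""
    return [b - a for a, b in zip(gaussian_octave, gaussian_octave[1:])]

# S + 3 = 6 blurred layers per octave -> S + 2 = 5 DoG layers
octave = [np.full((4, 4), float(i)) for i in range(6)]
dog = dog_layers(octave)
print(len(dog))      # 5
print(dog[0][0, 0])  # 1.0
```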
S15: Keypoint localization
To find the extreme points of the scale space, every sampled point is compared with all of its neighbors, to see whether it is larger or smaller than its neighbors in the image domain and the scale domain. The point under test is compared with its 8 neighbors at the same scale and the 9 × 2 corresponding points at the adjacent scales above and below, 26 points in total, to ensure that extreme points are detected in both scale space and the two-dimensional image space. If a point is the maximum or minimum among the 26 neighbors in its own layer and the two adjacent layers of the DoG scale space, it is taken as a feature point of the image at that scale.
During extremum comparison, the first and last layers of each group of images cannot take part in the comparison. To preserve the continuity of scale variation, we continue to generate 3 additional Gaussian-blurred images at the top layer of each group, so the Gaussian pyramid has S + 3 layers per group and the DoG pyramid has S + 2 layers per group.
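The 26-neighbor test above can be sketched as a check over a 3 × 3 × 3 cube in the DoG stack (strictness of the comparison is an assumption; the text only says "maximum or minimum"):

```python
import numpy as np

def is_extremum(dog, layer, y, x):
    """True if dog[layer][y, x] is a strict max or min among its
    26 neighbours (3x3x3 cube minus the centre) in the DoG stack."""
    cube = np.stack([dog[layer - 1][y-1:y+2, x-1:x+2],
                     dog[layer    ][y-1:y+2, x-1:x+2],
                     dog[layer + 1][y-1:y+2, x-1:x+2]])
    v = dog[layer][y, x]
    others = np.delete(cube.ravel(), 13)   # flat index 13 is the centre voxel
    return bool((v > others).all() or (v < others).all())

dog = [np.zeros((5, 5)) for _ in range(3)]
dog[1][2, 2] = 9.0                         # a clear maximum
print(is_extremum(dog, 1, 2, 2))           # True
```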
S16: Orientation assignment
The extreme points found via scale invariance give the keypoints their scale-invariant property. To make the descriptor rotation-invariant, a reference orientation must be assigned to each keypoint using the local features of the image. The gradient magnitude and direction at (x, y) are:
Gradient magnitude: m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
Gradient direction: θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
After the gradients of a keypoint are computed, a histogram is used to collect the gradients and directions of the pixels in its neighborhood. The gradient histogram divides the 0–360 degree range into 36 bins (posts) of 10 degrees each. The peak direction of the histogram represents the principal direction of the keypoint.
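The 36-bin orientation vote can be sketched as follows; the magnitude-weighted voting and the choice of returning the bin centre are assumptions, since the text only specifies the binning and the peak:

```python
def principal_direction(gradients):
    """Given (magnitude, direction_in_degrees) pairs from the keypoint's
    neighbourhood, vote into 36 ten-degree bins and return the centre
    angle of the peak bin."""
    bins = [0.0] * 36
    for mag, ang in gradients:
        bins[int(ang % 360) // 10] += mag   # magnitude-weighted vote
    peak = max(range(36), key=lambda b: bins[b])
    return peak * 10 + 5                    # centre of the winning bin

grads = [(1.0, 12.0), (2.0, 17.0), (0.5, 200.0)]
print(principal_direction(grads))  # bin 1 (10-20 deg) wins -> 15
```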
S17: Keypoint description
After the keypoints are computed, each keypoint is described by a group of vectors. The description covers not only the keypoint itself but also the surrounding pixels that contribute to it; it serves as the basis for object matching and gives the keypoint additional invariance, for example to illumination change and 3D viewpoint change. Keypoint description is divided into 3 main steps:
1. Rotate to the principal direction: the coordinate axes are rotated to the keypoint's orientation, to guarantee rotation invariance.
2. Generate the descriptor: 128 values are produced for the keypoint, forming the final 128-dimensional SIFT feature vector.
3. Normalization: the feature vector is normalized to unit length, which further removes the influence of illumination change.
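The normalization step can be sketched in a few lines (the zero-vector guard is an added safety assumption):

```python
import math

def normalize_descriptor(vec):
    """Normalize a 128-D SIFT descriptor to unit length; this removes
    the effect of uniform illumination change on gradient magnitudes."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm > 0 else list(vec)

d = normalize_descriptor([3.0, 4.0] + [0.0] * 126)
print(d[0], d[1])  # 0.6 0.8  (a unit-length vector)
```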
S18: Keypoint matching
Keypoint descriptor sets are built for the template image (reference image) and for the live image (observation image) respectively. Target recognition is completed by comparing the keypoint descriptors in the two point sets. The similarity of the 128-dimensional keypoint descriptors is measured by Euclidean distance.
Descriptor of the i-th keypoint in the template image: Ri = (ri1, ri2, …, ri128)
Descriptor of the i-th keypoint in the live image: Si = (si1, si2, …, si128)
Similarity of any two descriptors:
d(Ri, Sj) = sqrt(Σ (rik − sjk)²), summed over the 128 dimensions
To exclude keypoints with no matching relation caused by image occlusion and background clutter, d(Ri, Sj) must additionally satisfy a distance-ratio constraint between the nearest and second-nearest neighbor.
According to picture tests on the Made-in-China.com website, the threshold can be taken as 0.7.
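The exact ratio constraint is not reproduced in this text; the sketch below assumes the standard nearest/second-nearest ratio test, which is consistent with the 0.7 threshold reported for the Made-in-China.com tests. Function names and the toy 2-D descriptors are illustrative:

```python
import math

def euclidean(r, s):
    """d(Ri, Sj): Euclidean distance over the descriptor dimensions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r, s)))

def match(template_desc, live_desc, ratio=0.7):
    """Keep a match only when the nearest distance is below `ratio`
    times the second-nearest distance (ratio-test assumption)."""
    pairs = []
    for i, r in enumerate(template_desc):
        d = sorted((euclidean(r, s), j) for j, s in enumerate(live_desc))
        if len(d) > 1 and d[0][0] < ratio * d[1][0]:
            pairs.append((i, d[0][1]))
    return pairs

T = [[0.0, 0.0], [5.0, 5.0]]
L = [[0.1, 0.0], [9.0, 9.0]]
print(match(T, L))  # [(0, 0)] - only the first point matches unambiguously
```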
S19: Similarity calculation
We define a similarity S between the two images by a custom formula. Since SIFT distinguishes between the template image and the live image, we can swap the positions of the two images, compute the result twice, and take a weighted value of the two results.
According to picture tests on the Made-in-China.com website, the threshold can be taken as 0.1:
When S > Threshold: duplicate images.
When S < Threshold: not duplicate images.
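The custom similarity formula itself is not reproduced in this text. The sketch below therefore only illustrates the structure described: compute a score in both directions, combine them with a weight, and compare against the 0.1 threshold from the text. The per-direction score (matched pairs over keypoint count) and the 0.5 weight are assumptions:

```python
def similarity(matches_ab, matches_ba, n_a, n_b, w=0.5):
    """Symmetric similarity sketch: assumes the per-direction score is
    (#matched pairs / #keypoints), computed both ways and combined with
    weight w, per the text's 'weighted value of the two results'."""
    s_ab = matches_ab / n_a if n_a else 0.0
    s_ba = matches_ba / n_b if n_b else 0.0
    return w * s_ab + (1 - w) * s_ba

def is_duplicate(s, threshold=0.1):
    """Decision rule from the text: duplicate when S > Threshold."""
    return s > threshold

s = similarity(30, 28, 120, 110)
print(round(s, 3), is_duplicate(s))  # 0.252 True
```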
As shown in Fig. 2, images in different layers (intervals) of the same group (octave) are obtained by Gaussian filtering at different scales, while images of different groups (octaves) are obtained by down-sampling; note that the bottom layer of one group is generated by down-sampling the second-to-last layer of the previous group. To make the scale change continuous, Gaussian filtering is added on top of down-sampling; one image produces several groups (octaves) of images, and one group contains several layers (intervals) of images.
The DoG pyramid in Fig. 3 is built with the DoG operator: computationally, only the Gaussian-smoothed images of adjacent scales need to be subtracted, which simplifies the calculation.
Those of ordinary skill in the art should understand that the foregoing is only a specific embodiment of the present invention and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the invention shall be included within its scope of protection.
Claims (6)
- 1. An automatic method for detecting duplicate pictures on e-commerce websites based on the SIFT algorithm, characterized by the following steps: 1) pre-process each picture to be examined on the website by cropping, keeping only the vertical section from 15% to 85% of the picture height; 2) build the scale space: by applying scale transformations to the image, obtain the multi-scale scale-space representation sequence and search for image positions over all scales; candidate keypoints invariant to scale and rotation are identified with a Gaussian difference function; 3) keypoint localization: at each candidate position, determine location and scale by fitting a fine model; keypoints are selected according to their stability; 4) orientation assignment: based on local image gradient directions, assign one or more orientations to each keypoint position; all subsequent operations on the image data are performed relative to the keypoint's orientation, scale, and position, providing invariance to these transformations; 5) keypoint description: in a neighborhood around each keypoint, measure the local image gradients at the selected scale; these gradients are transformed into a representation that tolerates larger local shape deformation and illumination change; 6) keypoint matching: compare the descriptors in the two images pairwise to find mutually matching pairs of feature points, establishing the correspondence between the pictures; 7) similarity calculation: using a custom picture-similarity formula and a given similarity decision interval for duplicate pictures, decide whether the pictures are duplicates.
- 2. The method according to claim 1, characterized in that steps 1)–2) proceed as follows:
S12: scale-space representation. The SIFT algorithm searches for keypoints over different scale spaces, and the scale space is obtained by Gaussian blurring; the scale space L(x, y, σ) of an image is defined as the convolution of the original image I(x, y) with a variable-scale two-dimensional Gaussian function G(x, y, σ): L(x, y, σ) = G(x, y, σ) * I(x, y), where G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²)); (x, y) are spatial coordinates and σ is the scale coordinate; σ determines the smoothness of the image: when σ is small the fine detail of the image is reflected, and when σ is large its coarse outline is reflected; note that there is no strict requirement on the range of σ; in the present invention the interval of σ is [0.5, 1.6], where 0.5 and 1.6 correspond to the sharpest and the blurriest scale values respectively.
S13: construction of the Gaussian pyramid model. The scale space is implemented as a Gaussian pyramid whose construction has two parts: 1-1) applying Gaussian blur (Gaussian filtering) at different scales to the image; 1-2) down-sampling the image. The Gaussian pyramid model of an image is a series of images of decreasing size obtained by repeatedly down-sampling the original image, forming a tower shape from bottom to top; the original image is the first layer, each new down-sampled image is one layer, and the pyramid has n layers in total; the number of layers is determined jointly by the original size of the image and the size of the tower-top image, by the formula: n = log2{min(M, N)} − t, t ∈ [0, log2{min(M, N)}], where M, N are the dimensions of the original image and t is the logarithm of the minimum dimension of the tower-top image.
S14: DoG (difference-of-Gaussian) pyramid. The detection of all keypoints is based on scale-invariant features, and the scale-normalized LoG (Laplacian of Gaussian) operator has true scale invariance; the LoG operator is built from the second derivatives of the Gaussian function: ∇²G = ∂²G/∂x² + ∂²G/∂y²; the relation between the LoG operator and the Gaussian kernel is: LoG(x, y, σ) = σ²∇²G ≈ [Gauss(x, y, kσ) − Gauss(x, y, σ)] / (k − 1), i.e. G(x, y, kσ) − G(x, y, σ) ≈ (k − 1)σ²∇²G; the derivation shows that the difference of Gaussian kernels is directly related to the LoG operator, while computing directly with the LoG operator is complex, so a new operator DoG (Difference of Gaussians), the difference of Gaussians, is introduced: D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ).
- 3. The method according to claim 1, characterized in that steps 3)–6) proceed as follows:
S15: keypoint localization. To find the extreme points of the scale space, every sampled point is compared with all of its neighbors, to see whether it is larger or smaller than its neighbors in the image domain and the scale domain; the point under test is compared with its 8 neighbors at the same scale and the 9 × 2 corresponding points at the adjacent scales, 26 points in total, to ensure that extreme points are detected in both scale space and the two-dimensional image space; if a point is the maximum or minimum among the 26 neighbors in its own layer and the two adjacent layers of the DoG scale space, it is taken as a feature point of the image at that scale.
S16: orientation assignment. The extreme points found via scale invariance give the keypoints their scale-invariant property; to make the descriptor rotation-invariant, a reference orientation must be assigned to each keypoint using the local features of the image; the gradient magnitude and direction at (x, y) are: gradient magnitude m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²); gradient direction θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))).
S17: keypoint description. After the keypoints are computed, each keypoint is described by a group of vectors covering not only the keypoint itself but also the surrounding pixels that contribute to it; this serves as the basis for object matching and gives the keypoint additional invariance, including to illumination change and 3D viewpoint change; keypoint description is divided into 3 steps: 17-1) rotate to the principal direction: the coordinate axes are rotated to the keypoint's orientation to guarantee rotation invariance; 17-2) generate the descriptor: 128 values are produced for a keypoint, forming the final 128-dimensional SIFT feature vector; 17-3) normalization: the feature vector is normalized to unit length, further removing the influence of illumination change.
S18: keypoint matching. Keypoint descriptor sets are built for the template image and the live image (observation image) respectively; target recognition is completed by comparing the keypoint descriptors in the two point sets; the similarity of the 128-dimensional keypoint descriptors is measured by Euclidean distance; descriptor of the i-th keypoint in the template image: Ri = (ri1, ri2, …, ri128); descriptor of the i-th keypoint in the live image: Si = (si1, si2, …, si128), where Ri and Si are the descriptors (generally expressed as 128-dimensional vectors) of the i-th keypoints of the template image and the live image respectively, and rik, sik are the components of each dimension of the 128-dimensional descriptor vectors; the similarity of any two descriptors is d(Ri, Sj) = sqrt(Σ (rik − sjk)²); to exclude keypoints with no matching relation caused by image occlusion and background clutter, d(Ri, Sj) must additionally satisfy a nearest/second-nearest distance-ratio constraint.
S19: similarity calculation. We define a similarity S between the two images by a custom formula; since SIFT distinguishes between the template image and the live image, the positions of the two images are swapped, the result is computed twice, and a weighted value of the two results is taken; when S > Threshold: duplicate images; when S < Threshold: not duplicate images.
- 4. The method according to claim 1, characterized in that in step S15: during extremum comparison, the first and last layers of each group of images cannot take part in the comparison; to preserve the continuity of scale variation, 3 additional Gaussian-blurred images are generated at the top layer of each group, so the Gaussian pyramid has S + 3 layers per group and the DoG pyramid has S + 2 layers per group.
- 5. The method according to claim 1, characterized in that in step S16, after the gradients of a keypoint are computed, a histogram is used to collect the gradients and directions of the pixels in its neighborhood; the gradient histogram divides the 0–360 degree range into 36 bins of 10 degrees each; the peak direction of the histogram represents the principal direction of the keypoint.
- 6. The method according to claim 1, characterized in that the threshold taken in step S18 is 0.7 and the threshold in S19 is 0.1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710653168.6A CN107392215A (en) | 2017-08-02 | 2017-08-02 | A kind of multigraph detection method based on SIFT algorithms |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710653168.6A CN107392215A (en) | 2017-08-02 | 2017-08-02 | A kind of multigraph detection method based on SIFT algorithms |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107392215A true CN107392215A (en) | 2017-11-24 |
Family
ID=60344707
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710653168.6A Pending CN107392215A (en) | 2017-08-02 | 2017-08-02 | A kind of multigraph detection method based on SIFT algorithms |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107392215A (en) |
- 2017-08-02: CN application CN201710653168.6A filed; published as CN107392215A; legal status: Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101446980A (en) * | 2008-12-26 | 2009-06-03 | 北京大学 | Three-dimensional partial shape matching and retrieval method based on color spin images |
US8441489B2 (en) * | 2008-12-31 | 2013-05-14 | Intel Corporation | System and method for SIFT implementation and optimization |
CN101561866A (en) * | 2009-05-27 | 2009-10-21 | 上海交通大学 | Character recognition method based on SIFT feature and gray scale difference value histogram feature |
CN102663431A (en) * | 2012-04-17 | 2012-09-12 | 北京博研新创数码科技有限公司 | Image matching calculation method on basis of region weighting |
CN102930292A (en) * | 2012-10-17 | 2013-02-13 | 清华大学 | Object identification method based on p-SIFT (Scale Invariant Feature Transform) characteristic |
CN104899834A (en) * | 2015-03-04 | 2015-09-09 | 苏州大学 | Blurred image recognition method and apparatus based on SIFT algorithm |
Non-Patent Citations (4)
Title |
---|
Shi Xiaoyu: "Image Quality Assessment Based on SIFT Features and Structural Similarity", China Master's Theses Full-text Database, Information Science and Technology Series * |
Zhang Wenyu et al.: "UAV Scene Matching Algorithm Based on CenSurE-star Features", Chinese Journal of Scientific Instrument * |
Li Gang et al.: "Stereo Matching of Uncalibrated Images Based on Bidirectional SIFT", Computer Engineering and Applications * |
Wang Qihao et al.: "Local Matching Algorithm for Image Shopping Search", Computer Engineering and Applications * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108182205A (en) * | 2017-12-13 | 2018-06-19 | 南京信息工程大学 | Fast image retrieval method using hash algorithms based on SIFT |
CN108319958A (en) * | 2018-03-16 | 2018-07-24 | 福州大学 | Driving license detection and recognition method based on feature-fusion matching |
CN108960280A (en) * | 2018-05-21 | 2018-12-07 | 北京中科闻歌科技股份有限公司 | A kind of picture similarity detection method and system |
CN108960280B (en) * | 2018-05-21 | 2020-07-24 | 北京中科闻歌科技股份有限公司 | Picture similarity detection method and system |
CN108960251A (en) * | 2018-05-22 | 2018-12-07 | 东南大学 | Hardware circuit implementation method for generating the scale space of image matching descriptors |
CN109344710A (en) * | 2018-08-30 | 2019-02-15 | 东软集团股份有限公司 | Image feature point localization method, apparatus, storage medium and processor |
CN109102036A (en) * | 2018-09-26 | 2018-12-28 | 云南电网有限责任公司电力科学研究院 | Image labeling method and device for transmission line fault identification |
CN109697406A (en) * | 2018-11-09 | 2019-04-30 | 广西电网有限责任公司河池供电局 | Intelligent analysis method based on unmanned aerial vehicle imagery |
US11886492B2 (en) | 2019-05-29 | 2024-01-30 | Boe Technology Group Co., Ltd. | Method of matching image and apparatus thereof, device, medium and program product |
CN110188217A (en) * | 2019-05-29 | 2019-08-30 | 京东方科技集团股份有限公司 | Image duplicate checking method, apparatus, equipment and computer-readable storage media |
CN111582306A (en) * | 2020-03-30 | 2020-08-25 | 南昌大学 | Near-repetitive image matching method based on key point graph representation |
CN111709917A (en) * | 2020-06-01 | 2020-09-25 | 深圳市深视创新科技有限公司 | Label-based shape matching algorithm |
CN111709917B (en) * | 2020-06-01 | 2023-08-22 | 深圳市深视创新科技有限公司 | Shape matching algorithm based on annotation |
CN111739003B (en) * | 2020-06-18 | 2022-11-18 | 上海电器科学研究所(集团)有限公司 | Machine vision method for appearance detection |
CN111739003A (en) * | 2020-06-18 | 2020-10-02 | 上海电器科学研究所(集团)有限公司 | Machine vision algorithm for appearance detection |
CN111985500A (en) * | 2020-07-28 | 2020-11-24 | 国网山东省电力公司禹城市供电公司 | Method, system and device for checking relay protection constant value input |
CN111985500B (en) * | 2020-07-28 | 2024-03-29 | 国网山东省电力公司禹城市供电公司 | Verification method, system and device for relay protection fixed value input |
CN112634130A (en) * | 2020-08-24 | 2021-04-09 | 中国人民解放军陆军工程大学 | Unmanned aerial vehicle aerial image splicing method under Quick-SIFT operator |
CN116433887A (en) * | 2023-06-12 | 2023-07-14 | 山东鼎一建设有限公司 | Building rapid positioning method based on artificial intelligence |
CN116433887B (en) * | 2023-06-12 | 2023-08-15 | 山东鼎一建设有限公司 | Building rapid positioning method based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107392215A (en) | A kind of multigraph detection method based on SIFT algorithms | |
CN103077512B (en) | Based on the feature extracting and matching method of the digital picture that major component is analysed | |
Wang et al. | MARCH: Multiscale-arch-height description for mobile retrieval of leaf images | |
CN104766084B (en) | Near-copy image detection method based on multi-target matching | |
Sirmacek et al. | A probabilistic framework to detect buildings in aerial and satellite images | |
CN103729885B (en) | Hand-drawn scene three-dimensional modeling method combining multi-view projection registration with three-dimensional registration | |
Kim et al. | Boundary preserving dense local regions | |
CN107247930A (en) | SAR image object detection method based on CNN and Selective Attention Mechanism | |
CN108921939A (en) | Picture-based three-dimensional scene reconstruction method | |
CN104850822B (en) | Leaf identification method under simple background based on multi-feature fusion | |
CN106447704A (en) | A visible light-infrared image registration method based on salient region features and edge degree | |
CN105740378B (en) | Digital pathology full-section image retrieval method | |
CN113392856B (en) | Image forgery detection device and method | |
CN107644227A (en) | Affine-invariant descriptor fusing multiple viewing angles for commodity image search | |
Ion et al. | Matching 2D and 3D articulated shapes using the eccentricity transform | |
CN108898269A (en) | Electric power image-context impact evaluation method based on measurement | |
Zhang et al. | 3D tree skeletonization from multiple images based on PyrLK optical flow | |
CN108182705A (en) | A kind of three-dimensional coordinate localization method based on machine vision | |
CN106897722A (en) | A kind of trademark image retrieval method based on region shape feature | |
Zhang et al. | Perception-based shape retrieval for 3D building models | |
Liu et al. | A semi-supervised high-level feature selection framework for road centerline extraction | |
CN103336964B (en) | SIFT image matching method based on the mirror invariance of modulus differences | |
Shanmugavadivu et al. | FOSIR: fuzzy-object-shape for image retrieval applications | |
CN106951873A (en) | Remote sensing image target recognition method | |
Bourbakis et al. | Object recognition using wavelets, LG graphs and synthesis of regions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171124 |