CN103514434A - Method and device for identifying image


Info

Publication number
CN103514434A
CN103514434A (application CN201210227208.8A)
Authority
CN
China
Prior art keywords
image
characteristic
classification
contrast
contrast characteristic
Prior art date
Legal status
Granted
Application number
CN201210227208.8A
Other languages
Chinese (zh)
Other versions
CN103514434B (en)
Inventor
邓宇 (Deng Yu)
吴倩 (Wu Qian)
薛晖 (Xue Hui)
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority claimed from application CN201210227208.8A
Publication of CN103514434A
Application granted
Publication of CN103514434B
Status: Active

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and device for identifying an image, addressing the high resource consumption, very low efficiency, and large time overhead of existing image identification. The method establishes a first voting image for the images of each category. For each actual feature of the image to be detected, k matching contrast features are obtained, together with each contrast feature's category and relative center position, so that all categories are covered by a single matching pass. A similarity is then calculated for the contrast features of each category, the estimated center position of each contrast feature in the corresponding first voting image is determined, and the similarity is accumulated at that estimated center position, yielding a first voting image for every category. Finally, the first voting image containing the estimated center position with the maximum similarity is obtained, and the category of that first voting image is the category of the image to be detected.

Description

Image recognition method and device
Technical field
The present application relates to data processing technology, and in particular to an image recognition method and device.
Background
Image recognition uses a computer to process, analyze, and understand images so as to identify targets and objects of various kinds. In computer vision applications an image is conventionally processed into an N-dimensional vector, and many object-detection methods are built on this basis. Relying only on the gray-level features of image pixels, however, inevitably has many defects.
Serge Belongie et al. therefore exploited the edge and shape information of an image and proposed a detection method based on the SC (Shape Context) feature, which describes the boundary contour. On this basis, Liming Wang et al. proposed a method that estimates the object center and a confidence value by voting. This method first obtains the SC feature of each sampled point of the image to be detected; because points at corresponding relative positions on similar figures usually have similar SC features, each SC feature can be matched against the SC features in a preset feature dictionary and a corresponding similarity determined. The matching contrast image is then selected according to the similarity.
However, this method identifies only one class of contrast image at a time: the preset feature dictionary contains the SC features of only that class, so the identification process can only decide whether the image to be detected contains that class. In practice there may be n classes of contrast images, which requires building n feature dictionaries and, for each image to be detected whose category is to be determined, repeating the identification process n times.
This identification process therefore consumes considerable resources, its efficiency is very low, and its time overhead is very large.
Summary of the invention
The present application provides an image recognition method and device to solve the problems that conventional image identification consumes considerable resources, has very low efficiency, and incurs a very large time overhead.
To address the above problems, the present application discloses an image recognition method comprising:
establishing a first voting image for the images of each category;
extracting the actual feature of each sampled point of an image to be detected, and the actual position of each actual feature;
for each sampled point, obtaining k contrast features matching the actual feature, together with each contrast feature's category and relative center position;
for each contrast feature's category, calculating a similarity from the actual feature and the contrast feature;
in the first voting image corresponding to each contrast feature's category, determining the estimated center position of the contrast feature from the actual position of the actual feature and the contrast feature's relative center position, and adding the similarity at the estimated center position corresponding to the contrast feature;
traversing the similarity at each estimated center position in every first voting image, obtaining the first voting image containing the estimated center position with the maximum similarity, and identifying the category of the image to be detected from that first voting image.
Preferably, the method further comprises:
extracting the contrast feature of each sampled point of the contrast images, each contrast feature's category, and each contrast feature's relative center position, wherein the relative center position is the distance of the sampled point from the object center in the contrast image.
Preferably, the method further comprises:
for the contrast features of each sampled point, building an (n+2)-level search tree by performing n rounds of hierarchical clustering on the contrast features, wherein the level-1 node of the search tree is the search starting point, the level-2 to level-(n+1) nodes are the cluster centers of the clusters at each level, the level-(n+2) nodes are the contrast features, and n > 1, n being a positive integer.
Preferably, obtaining the k contrast features matching the actual feature comprises:
performing feature matching of the actual feature in the search tree to find the k contrast features matching the actual feature.
Preferably, the method further comprises:
presetting x zoom scales, and establishing one second voting image for each zoom scale under the same category.
Preferably, after calculating the similarity from the actual feature and the contrast feature, the method further comprises:
scaling the distance in the contrast feature's relative center position according to the zoom scale, obtaining the corresponding scaled center position;
in the second voting image for the contrast feature's category at that zoom scale, offsetting from the actual position of the actual feature according to the azimuth angle of the contrast feature's relative center position and the scaled center position, thereby projecting the estimated center position of the contrast feature into the second voting image;
adding the similarity at the estimated center position in the second voting image.
Preferably, the method further comprises:
traversing the similarity at each estimated center position in every second voting image, and obtaining the second voting image containing the estimated center position with the maximum similarity;
identifying the category and scale of the image to be detected from that second voting image.
Preferably, the method further comprises:
for the category of the image to be detected, identifying whether the user using the image to be detected has usage rights for that category.
Preferably, the contrast images are cartoon images and/or trademark images.
Correspondingly, the present application also discloses an image recognition device comprising:
a first-voting-image establishing module, configured to establish a first voting image for the images of each category;
an extraction module, configured to extract the actual feature of each sampled point of an image to be detected, and the actual position of each actual feature;
a matching acquisition module, configured, for each sampled point, to obtain k contrast features matching the actual feature, together with each contrast feature's category and relative center position;
a similarity calculation module, configured, for each contrast feature's category, to calculate a similarity from the actual feature and the contrast feature;
a determining and adding module, configured, in the first voting image corresponding to each contrast feature's category, to determine the estimated center position of the contrast feature from the actual position of the actual feature and the contrast feature's relative center position, and to add the similarity at the estimated center position corresponding to the contrast feature;
an obtaining and identification module, configured to traverse the similarity at each estimated center position in every first voting image, obtain the first voting image containing the estimated center position with the maximum similarity, and identify the category of the image to be detected from that first voting image.
Compared with the prior art, the present application has the following advantages:
First, the present application establishes a first voting image for the images of each category. When an actual feature is matched, the k matching contrast features are obtained together with each contrast feature's category and relative center position, so multiple categories are covered by a single matching pass. For the contrast features of each category, a similarity is then calculated, the estimated center position of each contrast feature in the first voting image is determined, and the similarity is accumulated at that estimated center position, finally yielding first voting images for multiple categories. The first voting image containing the estimated center position with the maximum similarity is then obtained, and its category is the category of the image to be detected. Since the present application can identify multiple categories in a single pass, the identification process saves resources, its efficiency is higher, and its time overhead is very low.
Second, the prior art performs feature matching against a feature dictionary, matching features one by one, so its efficiency is very low. The present application, for the contrast features of each sampled point, builds an (n+2)-level search tree by performing n rounds of hierarchical clustering on the contrast features: the level-1 node of the search tree is the search starting point, the level-2 to level-(n+1) nodes are the cluster centers of the clusters at each level, and the level-(n+2) nodes are the contrast features. Matching therefore starts from the level-1 node and proceeds level by level, quickly finding the k contrast features matching the actual feature; this further reduces resource consumption, raises matching efficiency, and shortens the matching process.
Third, the prior art can usually identify contrast images of only one fixed size; to remain unaffected by scale changes during identification, it tries to cover all sizes in the feature dictionary, which makes the dictionary contain too many features and feature matching inefficient. The present application presets x zoom scales and establishes one second voting image for each zoom scale under the same category. During identification, only the distance in the contrast feature's relative center position needs to be scaled by the zoom scale to obtain the corresponding scaled center position; the actual position of the actual feature is then combined with the scaled center position to determine the estimated center position of the contrast feature in the second voting image. The present application thus further reduces resource consumption, raises matching efficiency, and shortens the matching process.
Fourth, after obtaining the category of the image to be detected, the present application can further check whether the user using that image has usage rights for the category. The present application can therefore be applied to image-infringement detection and has very broad applications.
Brief description of the drawings
Fig. 1 is a flowchart of an image recognition method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a contrast image in an image recognition method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a search tree in an image recognition method according to an embodiment of the present application;
Fig. 4 is a flowchart of multi-category, multi-scale recognition in an image recognition method according to a preferred embodiment of the present application;
Fig. 5 is a structural diagram of an image recognition device according to an embodiment of the present application.
Detailed description
To make the above objects, features, and advantages of the present application more apparent, the present application is described in further detail below with reference to the drawings and specific embodiments.
The prior art identifies only one class of contrast image at a time; therefore, to determine the category of an image to be detected, the identification process must be repeated n times. This causes large resource consumption, very low efficiency, and very large time overhead.
The present application provides an image recognition method that can identify multiple categories in a single pass, so the identification process saves resources, its efficiency is higher, and its time overhead is very low.
Referring to Fig. 1, a flowchart of an image recognition method according to an embodiment of the present application is shown.
Step 11: establish a first voting image for the images of each category.
The present application does not restrict the image categories. For example, the categories of cartoon images may include Donald Duck, Mickey Mouse, Hello Kitty, and so on; for trademark images, each trademark may be regarded as a category.
For the images of each category, the present application establishes one first voting image and initializes the pixel values of all its points to 0. In practice, the object in the cartoon images of a category, such as Donald Duck, may appear in multiple different poses, but when the first voting image is established only the category of the image is considered, not the pose of the object in the image: whatever the pose, only one voting image is established per category. The pose problem can instead be handled as follows: under each category, one contrast image is established for each object pose, so that each pose has its corresponding contrast features in a contrast image.
In practice, an image can be regarded as a matrix and each pixel in the image as an element of the matrix; initially, the value of every pixel of a voting image is 0, i.e. the value of every element in the voting image's matrix is 0.
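As a minimal sketch (not from the patent; the category names are illustrative), the per-category first voting images of step 11 can be represented as zero-initialized grids:

```python
# One "first voting image" per category, stored as an H x W grid of
# accumulated similarity votes; every cell starts at 0, matching the
# all-zero matrix described above.
def make_voting_images(categories, height, width):
    return {c: [[0.0] * width for _ in range(height)] for c in categories}

votes = make_voting_images(["DonaldDuck", "Mickey", "HelloKitty"], 4, 4)
```

A later matching pass (step 15) would accumulate similarity values into these grids.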
Step 12: extract the actual feature of each sampled point of the image to be detected, and the actual position of each actual feature.
For the image to be detected, several sampled points are set in advance, so the actual feature of each sampled point can be extracted together with its actual position, i.e. the position of the sampled point in the image to be detected. The actual position is the coordinate position of the sampled point in the image to be detected: assuming the top-left vertex of the image is the coordinate origin (0, 0), a sampled point has coordinates (x, y), where x is nonnegative and, with the y-axis pointing upward, y is non-positive. The origin may of course be defined at other positions; the present application does not restrict this.
The sampled points are chosen in the image at a fixed step, so images to be detected of different sizes have different numbers of sampled points.
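The fixed-step sampling just described can be sketched as follows (the step value is an assumed illustrative parameter):

```python
# Sample points on a fixed-step grid: a larger image yields more
# sampled points precisely because the step stays constant.
def sample_points(height, width, step):
    return [(x, y) for y in range(0, height, step) for x in range(0, width, step)]

pts_small = sample_points(8, 8, 4)    # 2 x 2 grid of points
pts_large = sample_points(16, 16, 4)  # 4 x 4 grid of points
```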
Step 13: for each sampled point, obtain the k contrast features matching the actual feature, together with each contrast feature's category and relative center position.
For each sampled point, while obtaining the k contrast features matching the actual feature, each contrast feature's category and relative center position are obtained as well.
Here, a contrast feature is a feature obtained from a sampled point of a contrast image; a contrast feature's category is the category of the contrast image to which the contrast feature belongs; and a contrast feature's relative center position is the distance and azimuth angle between the sampled point corresponding to the contrast feature and the object center in the contrast image.
Referring to Fig. 2, a schematic diagram of a contrast image in an image recognition method according to an embodiment of the present application is shown.
Taking cartoon images as an example, Fig. 2 shows a contrast image of the Donald Duck category. In Fig. 2, A denotes the object in the contrast image, i.e. Donald Duck; A1 denotes the center of the object in the contrast image, i.e. the center of Donald Duck; and A2 denotes a sampled point in the contrast image.
The feature of A2 is a contrast feature; the contrast feature's category is the Donald Duck category; and the contrast feature's relative center position is the distance and azimuth angle between A2 and A1. Assuming A1 is the origin (0, 0) and A2 has coordinates (x2, y2), the distance in the relative center position is sqrt(x2² + y2²), where sqrt denotes the square root, and the azimuth angle in the relative center position is α = arctan(x2 / y2).
Of course, the top-left vertex of the contrast image may also be taken as the coordinate origin; in that case the coordinates of A1 and A2 are obtained separately and the calculation proceeds as above.
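The distance-and-azimuth computation of Fig. 2 can be sketched as below; `math.atan2` is used as a quadrant-safe generalization of the arctan(x2/y2) formula:

```python
import math

# Relative center position of a sampled point A2 = (x2, y2), taking
# the object center A1 as the origin: a (distance, azimuth) pair.
def relative_center(x2, y2):
    distance = math.sqrt(x2 ** 2 + y2 ** 2)
    azimuth = math.atan2(x2, y2)  # generalizes arctan(x2 / y2)
    return distance, azimuth

d, a = relative_center(3.0, 4.0)  # distance 5.0
```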
In a contrast image, the points whose pixel values are not 0 form the object in the contrast image. For example, among cartoon images, the object in a Donald Duck contrast image is Donald Duck.
Here, the present application may adopt the KNN (k-Nearest Neighbor) algorithm: if most of the K samples most similar to a given sample in feature space (i.e. its nearest neighbors) belong to a certain category, the sample also belongs to that category.
The actual features and contrast features in the present application may be SC (Shape Context) features. An SC feature is computed by establishing a polar coordinate system centered on a given point in the image, dividing the polar plane into several sector regions, and calculating a feature vector from the distribution of pixel brightness values around the center point.
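A brute-force version of the k-nearest-neighbor lookup in step 13 might look like this (the feature vectors, class labels, and relative centers below are invented for illustration; the search tree described later replaces this linear scan):

```python
import math

# Each dictionary entry is a contrast feature vector tagged with its
# category and relative center; return the k nearest entries to the
# actual feature by Euclidean distance.
def knn(actual, entries, k):
    def dist(e):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(actual, e["feat"])))
    return sorted(entries, key=dist)[:k]

dictionary = [
    {"feat": [0.0, 0.0], "cls": "DonaldDuck", "rel": (5.0, 0.6)},
    {"feat": [1.0, 1.0], "cls": "Mickey",     "rel": (2.0, 1.1)},
    {"feat": [0.1, 0.0], "cls": "DonaldDuck", "rel": (4.0, 0.5)},
]
nearest = knn([0.0, 0.1], dictionary, 2)
```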
Step 14: for each contrast feature's category, calculate a similarity from the actual feature and the contrast feature.
For each contrast feature's category, a similarity can be calculated from the actual feature and the contrast feature; suppose its value is X. Methods such as Euclidean distance or the chi-square test may be used to calculate the similarity; since similarity computation is prior art, it is not described further here.
The similarity value thus calculated can be used to measure the degree of similarity between the actual feature and the contrast feature.
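As one hedged example of step 14, a chi-square-based similarity could be computed as follows (the exp(−χ²) mapping from distance to similarity is an illustrative choice, not specified here):

```python
import math

# Chi-square distance between two SC histograms, turned into a
# similarity score where higher means more alike.
def chi_square(h1, h2, eps=1e-10):
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def similarity(h1, h2):
    return math.exp(-chi_square(h1, h2))

s_same = similarity([0.2, 0.3, 0.5], [0.2, 0.3, 0.5])  # identical histograms
s_diff = similarity([0.2, 0.3, 0.5], [0.5, 0.3, 0.2])  # different histograms
```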
Step 15: in the first voting image corresponding to each contrast feature's category, determine the estimated center position of the contrast feature from the actual position of the actual feature and the contrast feature's relative center position, and add the similarity at the estimated center position corresponding to the contrast feature.
Here, the estimated center position is the object center in the first voting image as estimated from the contrast feature and the actual feature. If the contrast feature's category is Donald Duck, the estimated center position is the estimated center of Donald Duck in the first voting image.
The contrast feature's relative center position is the distance and azimuth angle between the sampled point corresponding to the contrast feature and the object center in the contrast image. Given the coordinate position of that sampled point in the contrast image, projecting the sampled point by the azimuth angle and distance of the relative center position yields the object center in the contrast image.
The actual position of an actual feature is the coordinate position, in the image to be detected, of the sampled point corresponding to the actual feature. Treating the actual position in the same way, i.e. projecting the coordinate position of the actual feature's sampled point by the azimuth angle and distance of the relative center position, the projected point is the object center corresponding to the contrast feature, namely the estimated center position of the contrast feature in the first voting image.
Since the calculated similarity value measures the degree of similarity between the actual feature and the contrast feature, it equally measures how likely the point estimated from them is to be the object center in the first voting image; the similarity value X is therefore added at the estimated center position.
In the present application, one actual feature is extracted per sampled point of the image to be detected, and each actual feature corresponds to K contrast features, so the number of categories corresponding to an actual feature may range from 1 to K. Likewise, even two contrast features of the same category may come from contrast images of different object poses under that category; since the object centers of different poses may or may not coincide, the estimated center positions calculated from the two contrast features may be identical or different, depending on the data. If the estimated center positions calculated from the two contrast features are identical, the corresponding similarities are superimposed at that estimated center position: the similarity at an estimated center position is the accumulated sum of the similarities calculated from multiple contrast features and actual features.
That is, in practice, if after the estimated center position of a contrast feature is calculated a similarity has already been added at that position, the new similarity is simply accumulated on top of it.
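The projection-and-accumulation of step 15 can be sketched as follows (the coordinate conventions are assumptions for illustration: azimuth measured as in the arctan(x/y) convention of Fig. 2, grid indexed [row][col]):

```python
import math

# Project the matched contrast feature's relative center from the
# sampled point's actual location, then accumulate the similarity at
# that estimated center in the class's vote map; repeated hits at the
# same cell stack up, as described in the text.
def cast_vote(vote_map, actual_xy, rel_center, sim):
    x, y = actual_xy
    dist, angle = rel_center
    cx = int(round(x + dist * math.sin(angle)))  # azimuth from the y-axis,
    cy = int(round(y + dist * math.cos(angle)))  # matching arctan(x / y)
    if 0 <= cy < len(vote_map) and 0 <= cx < len(vote_map[0]):
        vote_map[cy][cx] += sim
    return vote_map

vm = [[0.0] * 5 for _ in range(5)]
cast_vote(vm, (1, 1), (math.sqrt(2.0), math.atan2(1.0, 1.0)), 0.8)
cast_vote(vm, (0, 0), (2.0 * math.sqrt(2.0), math.atan2(1.0, 1.0)), 0.5)
```

Both votes land on cell (2, 2), so the similarities accumulate there.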
Step 16: traverse the similarity at each estimated center position in every first voting image, obtain the first voting image containing the estimated center position with the maximum similarity, and identify the category of the image to be detected from that first voting image.
After the above steps, several estimated center positions exist in the first voting image corresponding to each contrast feature's category.
The similarity at each estimated center position in every first voting image can therefore be traversed. For example, suppose the similarities at the estimated center positions in first voting image 1 are 10, 12, and 15, and those in first voting image 2 are 20, 13, and 31. The first voting image containing the estimated center position with the maximum similarity among all first voting images is then obtained: in this example the maximum similarity value is 31, so the corresponding image is first voting image 2. The category of that first voting image is then taken as the category of the image to be detected. In this example, the category of first voting image 2 is Hello Kitty, so the category of the image to be detected is also Hello Kitty.
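The traversal in step 16 reduces to a global argmax over all first voting images, sketched here with the example values above (the class labels are invented):

```python
# Find the voting image whose grid holds the globally maximal
# similarity; its category identifies the image to be detected.
def best_class(vote_maps):
    best = None
    for cls, grid in vote_maps.items():
        peak = max(v for row in grid for v in row)
        if best is None or peak > best[1]:
            best = (cls, peak)
    return best

maps = {
    "Mickey":     [[10, 12], [15, 0]],   # first voting image 1
    "HelloKitty": [[20, 13], [31, 0]],   # first voting image 2
}
winner = best_class(maps)  # ("HelloKitty", 31)
```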
In summary, the present application establishes a first voting image for the images of each category; when an actual feature is matched, the k matching contrast features are obtained together with their categories and relative center positions, so multiple categories are covered by a single matching pass. For the contrast features of each category, a similarity is calculated and the estimated center position of each contrast feature in the first voting image is determined; the similarity is then accumulated at that estimated center position, finally yielding first voting images for multiple categories. The first voting image containing the estimated center position with the maximum similarity is then obtained, and its category is the category of the image to be detected. The present application can thus identify multiple categories in a single pass, so the identification process saves resources, its efficiency is higher, and its time overhead is very low.
Preferably, the contrast feature of each sampled point of the contrast images, each contrast feature's category, and each contrast feature's relative center position are extracted, wherein the relative center position is the distance of the sampled point from the object center of the contrast image.
In the present application, a training sample set can be established in advance, whose samples are the contrast images.
The contrast feature of each sampled point of a contrast image can then be extracted, together with the contrast feature's category, i.e. the category of the contrast image to which it belongs, and the contrast feature's relative center position, i.e. the distance and azimuth angle between the sampled point and the object center of the contrast image.
For the relative center position, see the discussion of Fig. 2 above; it is not repeated here.
In practice, the object in the images of a category may have different poses, so under each category one contrast image can be established for each object pose. This makes the training sample set more comprehensive: the contrast features extracted from sampled points can come from contrast images of objects in every pose, making the contrast features more comprehensive as well, providing a more complete basis for subsequent image recognition and improving accuracy.
Preferably, the contrast images are cartoon images and/or trademark images.
The contrast images in the present application may be cartoon images, trademark images, or of course both cartoon images and trademark images.
The categories of cartoon images may be specific cartoon characters, and each trademark image may be a trademark; the present application does not restrict this.
Referring to Fig. 3, a schematic diagram of a search tree in an image recognition method according to an embodiment of the present application is shown.
Preferably, for the contrast features of each sampled point, an (n+2)-level search tree is built by performing n rounds of hierarchical clustering on the contrast features, wherein the level-1 node of the search tree is the search starting point, the level-2 to level-(n+1) nodes are the cluster centers of the clusters at each level, the level-(n+2) nodes are the contrast features, and n > 1, n being a positive integer.
The prior art performs feature matching against a feature dictionary that contains all the contrast features extracted from the contrast images; matching traverses these contrast features one by one, so its efficiency is very low.
For the extracted contrast features, the present application can instead cluster them, for example with the K-means clustering method. The process of building the search tree is described in detail below.
1) The level-1 node of the search tree is set as the search starting point.
2) Level-1 clustering is performed on the contrast features with the K-means method, yielding K1 level-1 cluster centers; each level-1 cluster center becomes a level-2 node.
3) Level-2 clustering is performed on the contrast features under each level-1 cluster center, yielding K2 level-2 cluster centers; each level-2 cluster center becomes a level-3 node.
4) By analogy, level-n clustering is performed on the contrast features under each level-(n-1) cluster center, yielding Kn level-n cluster centers, each of which becomes a level-(n+1) node, until the level-n cluster centers cannot be clustered further. The level-(n+2) nodes are the contrast features themselves.
Here n > 1 and n is a positive integer.
Of course, the size of n can also be preset, with clustering stopping once the preset size is reached.
When performing feature matching, the present application therefore starts from the level-1 node of the search tree and matches level by level, which can quickly find the k contrast features matching the actual feature. This further reduces resource consumption, raises matching efficiency, and shortens the matching process.
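As a rough sketch of the search-tree idea, hierarchical K-means clustering can be built and queried as follows. All names, the branching factor, the depth, and the feature dimensionality here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    """Minimal K-means: returns (centers, labels), with labels recomputed
    against the final centers so assignment and lookup agree."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for i in range(iters + 1):
        dist = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        if i == iters:
            break
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def build_tree(features, branch, depth):
    """Cluster step by step: inner nodes hold cluster centers, leaves hold
    the contrast features themselves (the deepest level of the tree)."""
    if depth == 0 or len(features) <= branch:
        return {"leaf": True, "features": features}
    centers, labels = kmeans(features, branch)
    kept, children = [], []
    for j in range(branch):
        sub = features[labels == j]
        if len(sub):                      # skip empty clusters
            kept.append(centers[j])
            children.append(build_tree(sub, branch, depth - 1))
    return {"leaf": False, "centers": np.array(kept), "children": children}

def query(tree, feat, k):
    """Start at the root (the search starting point), descend level by level
    to the nearest cluster center, then return the k nearest leaf features."""
    node = tree
    while not node["leaf"]:
        dist = np.linalg.norm(node["centers"] - feat, axis=1)
        node = node["children"][int(dist.argmin())]
    cand = node["features"]
    order = np.linalg.norm(cand - feat, axis=1).argsort()[:k]
    return cand[order]

rng = np.random.default_rng(1)
db = rng.normal(size=(200, 8))            # 200 contrast features, 8-D
tree = build_tree(db, branch=4, depth=2)
matches = query(tree, db[0], k=3)         # db[0] itself is the best match
```

Compared with matching against all 200 features one by one, the query only compares against a handful of cluster centers per level plus one leaf, which is the source of the speed-up claimed above.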
With reference to Fig. 4, a flowchart of the multi-class, multi-scale recognition method in an image recognition method according to a preferred embodiment of the present application is provided.
Step 401: x scaling scales are preset, and one second voting image is established for each scaling scale under the same classification;
Because image size is variable, and the prior art can usually only recognize contrast images of a consistent size, the prior art tries to cover all sizes in the feature dictionary in order to remain unaffected by scale changes during recognition. This results in too many features in the feature dictionary and low efficiency during feature matching.
To further reduce the time wasted on matching and to reduce resource consumption, the present application presets x scaling scales and then establishes one second voting image for each scaling scale under the same classification.
For example, suppose there are 3 scaling scales: 0.5, 1, and 2.3. Then, for each classification, 3 second voting images are established: second voting image 1 corresponds to scaling scale 0.5, second voting image 2 corresponds to scaling scale 1, and second voting image 3 corresponds to scaling scale 2.3. Although each second voting image corresponds to a different scaling scale, the sizes of all the second voting images are identical.
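The bookkeeping described above can be sketched as follows. The class names and image size are hypothetical; only the three scale values come from the example in the text:

```python
import numpy as np

# Preset scaling scales (the three from the example above) and some
# hypothetical classifications.
scales = [0.5, 1.0, 2.3]
classes = ["class_a", "class_b"]

# One second voting image per (classification, scaling scale) pair.
# Every voting image has the same size regardless of its scale, and
# starts with zero accumulated similarity everywhere.
H, W = 48, 64
voting = {(c, s): np.zeros((H, W)) for c in classes for s in scales}
```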
Step 402: the actual feature of each sampled point of the image to be detected and the actual position of the actual feature are extracted;
Step 403: for each sampled point, feature matching is performed on the actual feature in the search tree to find the k contrast features matching the actual feature, and the classification of each contrast feature and the relative center position of each contrast feature are obtained;
Each sampled point corresponds to one actual feature; therefore, feature matching can be performed on the actual feature in the search tree to find the k contrast features matching the actual feature, and to obtain the classification of each contrast feature and the relative center position of each contrast feature.
Step 404: for the classification of each contrast feature, a similarity is calculated from the actual feature and the contrast feature;
Step 405: the distance in the relative center position of the contrast feature is scaled according to the scaling scale to obtain the corresponding scaled center position;
A change in image scale only changes the distance between two points in the image; it does not change the orientation angle between them. Therefore, the scaling scale only scales the distance in the relative center position of the contrast feature and does not change the angle in the relative center position.
For example, if the distance in the relative center position of the contrast feature is 20 and the scaling scale is 0.5, the corresponding scaled center position is 20*0.5=10.
Because the second voting images corresponding to different scaling scales have the same size, scaling the distance in the relative center position of the contrast feature by each scaling scale yields the scaled center position corresponding to each scaling scale.
The present application scales the distance in the relative center position of the contrast feature with nothing more than simple arithmetic, which resolves the effect of scale changes during recognition; compared with a mechanical matching process, this saves a great deal of time and improves recognition efficiency.
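A minimal sketch of this arithmetic (function and variable names are assumptions for illustration): only the distance component of the relative center position is multiplied by the scaling scale, while the orientation angle passes through unchanged.

```python
import math

def scale_relative_center(distance, angle, scale):
    """Scale the distance of a relative center position; the orientation
    angle between the two points is not affected by a scale change."""
    return distance * scale, angle

# The worked example from the text: distance 20 at scaling scale 0.5.
scaled_dist, scaled_angle = scale_relative_center(20.0, math.pi / 4, 0.5)
# scaled_dist is 20 * 0.5 = 10; scaled_angle is still pi/4.
```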
Step 406: for the second voting image of the contrast feature's classification under the scaling scale, the actual position of the actual feature is offset according to the orientation angle in the relative center position of the contrast feature and the scaled center position, and projected to obtain the estimated center position of the contrast feature in the second voting image;
For the second voting image of the contrast feature's classification under the scaling scale, the scaled center position of the contrast feature under the corresponding scaling scale has been obtained above. Under that scaling scale, the orientation angle in the contrast feature's center position is unchanged, but the distance in the contrast feature's center position changes to the scaled center position.
The actual position of the actual feature, i.e., the coordinate position of the sampled point corresponding to the actual feature, is offset according to the distance and orientation angle in the relative center position of the contrast feature; that is, the coordinate position of the sampled point is offset according to the orientation angle of the contrast feature's center position under the scaling scale and the scaled center position, and the point after the offset is projected into the second voting image under the scaling scale. That point is the estimated center position of the contrast feature in the second voting image.
Step 407: the similarity is added at the estimated center position in the second voting image.
At the estimated center position in the second voting image, the similarity value calculated from the contrast feature and the actual feature is added. If the similarity is 10 and the value at the estimated center position is 0 before the addition, the value after the addition is 10; if the value before the addition is 13, the value after the addition is 23.
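Steps 406 and 407 can be sketched together as a single voting operation. This is a hedged illustration: the coordinate convention and all names are assumed, not specified by the patent. The sampled point's actual position is offset by the scaled distance along the stored orientation angle, and the similarity is accumulated at the resulting estimated center position:

```python
import math
import numpy as np

def cast_vote(vote_img, sample_xy, distance, angle, scale, similarity):
    """Offset the actual position by the scaled distance along the
    orientation angle, then add the similarity at the estimated center
    position (votes falling outside the image are simply dropped)."""
    x, y = sample_xy
    ex = int(round(x + distance * scale * math.cos(angle)))
    ey = int(round(y + distance * scale * math.sin(angle)))
    if 0 <= ey < vote_img.shape[0] and 0 <= ex < vote_img.shape[1]:
        vote_img[ey, ex] += similarity
    return ex, ey

img = np.zeros((48, 64))
# Two votes land on the same estimated center, so the accumulated value
# goes 0 -> 10 -> 23, matching the accumulation example in the text.
cast_vote(img, (30, 20), 20.0, 0.0, 0.5, 10.0)
ex, ey = cast_vote(img, (30, 20), 20.0, 0.0, 0.5, 13.0)
```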
Step 408: the similarity at each estimated center position in every second voting image is traversed, and the second voting image corresponding to the estimated center position with the maximum similarity is obtained;
The similarity at each estimated center position in every second voting image is traversed; among all the second voting images, the one corresponding to the estimated center position with the largest similarity value is obtained. This is essentially the same as the discussion at step 16 above, so it is not repeated here.
Step 409: the classification and size of the image to be detected are identified from the second voting image;
The second voting image corresponding to the estimated center position with the maximum similarity has been obtained above; the classification and size of that second voting image are the classification and size of the image to be detected.
Of course, in actual processing, after traversing all the second voting images, the second voting images whose estimated center positions rank in the top n by similarity (n being a positive integer) can also be obtained; then, according to whether the similarity values at those estimated center positions and their distribution meet a preset condition, the qualifying second voting images are determined, that is, whether the classification of each such second voting image is the classification of the image to be detected is checked.
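Steps 408 and 409 amount to a global arg-max over all second voting images. A sketch, with hypothetical class names and vote values:

```python
import numpy as np

def identify(voting):
    """Traverse every second voting image and return the classification,
    scaling scale, and peak similarity of the image holding the largest
    accumulated vote."""
    best = None
    for (cls, scale), img in voting.items():
        peak = float(img.max())
        if best is None or peak > best[2]:
            best = (cls, scale, peak)
    return best

voting = {("kitty", 0.5): np.zeros((4, 4)),
          ("kitty", 1.0): np.zeros((4, 4)),
          ("bear", 2.3): np.zeros((4, 4))}
voting[("bear", 2.3)][1, 2] = 42.0        # strongest accumulated similarity
cls, scale, peak = identify(voting)
```

Because the winning voting image carries both a classification and a scaling scale, this one traversal recovers the class and the size of the image to be detected at the same time.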
Step 410: for the classification of the image to be detected, whether the image to be detected has the usage rights of the classification is identified.
Having obtained the classification of the image to be detected through the above calculation, it can further be detected whether the user using the image to be detected has the usage rights of that classification, i.e., whether the user's behavior constitutes infringement.
For example, on an electronic trading website, if the classification of the image to be detected is obtained as Hello Kitty, it can further be detected whether the user (i.e., the seller) using the image to be detected is an authorized agent of Hello Kitty.
In summary, the prior art performs feature matching through a feature dictionary and, because it matches one by one, is very inefficient. The present application, for the contrast features of the sampled points, establishes an (n+2)-level search tree by performing n levels of clustering on the contrast features step by step. The level-1 node of the search tree is the search starting point, the level-2 to level-(n+1) nodes are the cluster centers at each level, and the level-(n+2) nodes are the contrast features themselves. Therefore, when performing matching, the present application starts from the level-1 node of the search tree and matches level by level, quickly finding the k contrast features matching the actual feature, further reducing resource consumption, raising matching efficiency, and shortening the matching process.
Secondly, the prior art can usually only recognize contrast images of a consistent size and, to remain unaffected by scale changes during recognition, tries to cover all sizes in the feature dictionary, resulting in too many features and low matching efficiency. The present application presets x scaling scales and establishes one second voting image for each scaling scale under the same classification. During recognition, it is only necessary to scale the relative center position of the contrast feature by the scaling scale to obtain the corresponding scaled center position; subsequently, the actual position of the actual feature and the scaled center position are combined to determine the estimated center position of the contrast feature in the second voting image. The present application can thereby further reduce resource consumption, raise matching efficiency, and shorten the matching process.
Furthermore, after obtaining the classification of the image to be detected, the present application can further detect whether the user using the image to be detected has the usage rights of that classification. The present application can therefore be applied to detecting image infringement and has very wide application.
With reference to Fig. 5, a structural diagram of an image recognition device according to an embodiment of the present application is provided.
Accordingly, the present application also provides an image recognition device, comprising: a first voting image establishment module 11, an extraction module 12, a matching and acquisition module 13, a similarity calculation module 14, a determination and addition module 15, and an acquisition and identification module 16, wherein:
the first voting image establishment module 11 is configured to establish one first voting image for the images of each classification;
the extraction module 12 is configured to extract the actual feature of each sampled point of the image to be detected and the actual position of the actual feature;
the matching and acquisition module 13 is configured to, for each sampled point, obtain the k contrast features matching the actual feature, and obtain the classification of each contrast feature and the relative center position of each contrast feature;
the similarity calculation module 14 is configured to, for the classification of each contrast feature, calculate a similarity from the actual feature and the contrast feature;
the determination and addition module 15 is configured to, for the first voting image corresponding to the classification of each contrast feature, determine the estimated center position of the contrast feature in the first voting image according to the actual position of the actual feature and the relative center position of the contrast feature, and add the similarity at the estimated center position corresponding to the contrast feature;
the acquisition and identification module 16 is configured to traverse the similarity at each estimated center position in every first voting image, obtain the first voting image corresponding to the estimated center position with the maximum similarity, and identify the classification of the image to be detected from that first voting image.
Preferably, the device further comprises:
an extraction module, configured to extract the contrast feature of each sampled point of the contrast images, the classification of the contrast feature, and the relative center position of the contrast feature, wherein the relative center position is the distance and orientation angle between the sampled point and the object center position in the contrast image;
a search tree establishment module, configured to, for the contrast features of the sampled points, establish an (n+2)-level search tree by performing n levels of clustering on the contrast features step by step, wherein the level-1 node of the search tree is the search starting point, the level-2 to level-(n+1) nodes are the cluster centers at each level, and the level-(n+2) nodes are the contrast features themselves, where n > 1 and n is a positive integer.
The matching and acquisition module 13 is configured to perform feature matching on the actual feature in the search tree to find the k contrast features matching the actual feature.
Preferably, the device further comprises:
a second voting image establishment module, configured to preset x scaling scales and establish one second voting image for each scaling scale under the same classification.
The determination and addition module 15 comprises:
a scaling submodule, configured to scale the distance in the relative center position of the contrast feature according to the scaling scale to obtain the corresponding scaled center position;
a center position determination submodule, configured to, for the second voting image of the contrast feature's classification under the scaling scale, offset the actual position of the actual feature according to the orientation angle in the relative center position of the contrast feature and the scaled center position, and project it to obtain the estimated center position of the contrast feature in the second voting image;
an addition submodule, configured to add the similarity at the estimated center position in the second voting image.
Preferably, the acquisition and identification module 16 is also configured to traverse the similarity at each estimated center position in every second voting image, obtain the second voting image corresponding to the estimated center position with the maximum similarity, and identify the classification and scale of the image to be detected from that second voting image.
Preferably, the device further comprises:
a rights identification module, configured to, for the classification of the image to be detected, identify whether the image to be detected has the usage rights of that classification.
Preferably, the contrast image is a cartoon image and/or a trademark image.
As the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant parts, refer to the description of the method embodiments.
Each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to mutually.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Although preferred embodiments of the present application have been described, those skilled in the art, once apprised of the basic inventive concept, can make other changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article comprising an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Finally, it should also be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the existence of other identical elements in the process, method, article, or device comprising that element.
The image recognition method and device provided by the present application have been described in detail above; specific examples are applied herein to set forth the principles and embodiments of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific embodiments and application scope according to the idea of the present application. In summary, this description should not be construed as a limitation on the present application.

Claims (10)

1. An image recognition method, characterized by comprising:
establishing one first voting image for the images of each classification;
extracting the actual feature of each sampled point of an image to be detected and the actual position of the actual feature;
for each sampled point, obtaining k contrast features matching the actual feature, and obtaining the classification of each contrast feature and the relative center position of each contrast feature;
for the classification of each contrast feature, calculating a similarity from the actual feature and the contrast feature;
for the first voting image corresponding to the classification of each contrast feature, determining the estimated center position of the contrast feature in the first voting image according to the actual position of the actual feature and the relative center position of the contrast feature, and adding the similarity at the estimated center position corresponding to the contrast feature;
traversing the similarity at each estimated center position in every first voting image, obtaining the first voting image corresponding to the estimated center position with the maximum similarity, and identifying the classification of the image to be detected from that first voting image.
2. The method according to claim 1, characterized by further comprising:
extracting the contrast feature of each sampled point of contrast images, the classification of the contrast feature, and the relative center position of the contrast feature, wherein the relative center position is the distance and orientation angle between the sampled point and the object center position in the contrast image.
3. The method according to claim 2, characterized by further comprising:
for the contrast features of the sampled points, establishing an (n+2)-level search tree by performing n levels of clustering on the contrast features step by step, wherein the level-1 node of the search tree is the search starting point, the level-2 to level-(n+1) nodes are the cluster centers at each level, and the level-(n+2) nodes are the contrast features, n > 1, n being a positive integer.
4. The method according to claim 3, characterized in that obtaining k contrast features matching the actual feature comprises:
performing feature matching on the actual feature in the search tree to find the k contrast features matching the actual feature.
5. The method according to claim 1, characterized by further comprising:
presetting x scaling scales, and establishing one second voting image for each scaling scale under the same classification.
6. The method according to claim 5, characterized in that, after calculating the similarity from the actual feature and the contrast feature, the method further comprises:
scaling the distance in the relative center position of the contrast feature according to the scaling scale to obtain the corresponding scaled center position;
for the second voting image of the contrast feature's classification under the scaling scale, offsetting the actual position of the actual feature according to the orientation angle in the relative center position of the contrast feature and the scaled center position, and projecting it to obtain the estimated center position of the contrast feature in the second voting image;
adding the similarity at the estimated center position in the second voting image.
7. The method according to claim 6, characterized by further comprising:
traversing the similarity at each estimated center position in every second voting image, and obtaining the second voting image corresponding to the estimated center position with the maximum similarity;
identifying the classification and scale of the image to be detected from that second voting image.
8. The method according to claim 1 or 7, characterized by further comprising:
for the classification of the image to be detected, identifying whether the user using the image to be detected has the usage rights of the classification.
9. The method according to claim 8, characterized in that the contrast images are cartoon images and/or trademark images.
10. An image recognition device, characterized by comprising:
a first voting image establishment module, configured to establish one first voting image for the images of each classification;
an extraction module, configured to extract the actual feature of each sampled point of an image to be detected and the actual position of the actual feature;
a matching and acquisition module, configured to, for each sampled point, obtain k contrast features matching the actual feature, and obtain the classification of each contrast feature and the relative center position of each contrast feature;
a similarity calculation module, configured to, for the classification of each contrast feature, calculate a similarity from the actual feature and the contrast feature;
a determination and addition module, configured to, for the first voting image corresponding to the classification of each contrast feature, determine the estimated center position of the contrast feature in the first voting image according to the actual position of the actual feature and the relative center position of the contrast feature, and add the similarity at the estimated center position corresponding to the contrast feature;
an acquisition and identification module, configured to traverse the similarity at each estimated center position in every first voting image, obtain the first voting image corresponding to the estimated center position with the maximum similarity, and identify the classification of the image to be detected from that first voting image.
CN201210227208.8A 2012-06-29 2012-06-29 Method and device for identifying image Active CN103514434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210227208.8A CN103514434B (en) 2012-06-29 2012-06-29 Method and device for identifying image

Publications (2)

Publication Number Publication Date
CN103514434A true CN103514434A (en) 2014-01-15
CN103514434B CN103514434B (en) 2017-04-12

Family

ID=49897133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210227208.8A Active CN103514434B (en) 2012-06-29 2012-06-29 Method and device for identifying image

Country Status (1)

Country Link
CN (1) CN103514434B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110019898A (en) * 2017-08-08 2019-07-16 航天信息股份有限公司 A kind of animation image processing system
CN111026641A (en) * 2019-11-14 2020-04-17 北京云聚智慧科技有限公司 Picture comparison method and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833672A (en) * 2010-04-02 2010-09-15 清华大学 Sparse representation face identification method based on constrained sampling and shape feature
US20120020558A1 (en) * 2010-07-24 2012-01-26 Canon Kabushiki Kaisha Method for estimating attribute of object, apparatus thereof, and storage medium
CN102521565A (en) * 2011-11-23 2012-06-27 浙江晨鹰科技有限公司 Garment identification method and system for low-resolution video


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sun Caitang, et al.: "Iris recognition algorithm combining weighted K-nearest neighbor and weighted voting", Journal of Chinese Computer Systems *


Also Published As

Publication number Publication date
CN103514434B (en) 2017-04-12


Legal Events

Code Description
C06 / PB01 Publication
C10 / SE01 Entry into substantive examination
REG Reference to a national code (country: HK; legal event code: DE; document number: 1191718)
GR01 Patent grant
REG Reference to a national code (country: HK; legal event code: GR; document number: 1191718)