CN101807257A - Method for identifying information of image tag - Google Patents
Method for identifying information of image tag
- Publication number
- CN101807257A CN201010169350A
- Authority
- CN
- China
- Prior art keywords
- image
- feature point
- template image
- identified
- surf
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a method for identifying information of an image tag in the technical field of image processing. The method comprises the following steps: building a template image library; performing Hessian feature point detection on the template images and on the image to be identified; performing SURF feature extraction and LBP feature extraction on each feature point of the template images and of the image to be identified; performing SURF feature matching between the image to be identified and the template images to obtain the successfully pre-matched template images; performing LBP feature matching between the image to be identified and the successfully pre-matched template images to obtain the template images that are successfully matched a second time; and weighting the template images that are successfully matched a second time, the template image with the largest weight value being the template image that best matches the image to be identified. The method can effectively identify image tags with complex backgrounds, scale changes, orientation changes and viewing angle changes, avoids large-scale machine learning and complex image preprocessing, and has the advantages of high calculation speed, high identification accuracy and a wide range of application scenarios.
Description
Technical field
The present invention relates to a method in the technical field of image processing, and more specifically to a method for identifying information of an image tag.
Background technology
At present, recognition technology based on image tags is widely used. This technology can identify a specific tag in an image or a video frame, including station logos, trademarks, logos, advertising signs and the like, and has very important and wide applications in fields such as advertising information statistics, picture/video retrieval and screening of undesirable multimedia information.
Conceptually, tag recognition can be regarded as an extension of object recognition technology, which has been developed further in the West. Traditional tag recognition, however, focuses on features such as the structure, form, color and statistical information of an object, which brings a major problem: poor resistance to interference. Changes in object size, deformation, tilt, contamination, brightness variation, breakage, color change, viewing angle change and so on all strongly affect recognition accuracy. At the same time, Western tags and Chinese tags differ somewhat: Chinese tags usually contain elements with relatively complex morphological structure, such as Chinese characters, for which traditional features such as shape and structure are relatively weak. In addition, traditional object recognition mostly uses techniques such as neural networks, Bayesian networks and SVMs, all of which require a large amount of machine learning, and machine learning is a tedious and complicated process.
In 2004, on the basis of summarizing existing invariant-based feature detection methods, David G. Lowe proposed the SIFT (Scale-Invariant Feature Transform) method, a scale-space-based local image feature descriptor that remains fairly stable under image scaling, rotation, affine transformation and illumination variation. However, the 128-dimensional vectors generated by the SIFT method are not satisfactory in matching speed, and feature point detection and feature vector computation are also time-consuming. Yan Ke, building on SIFT, replaced the histogram step in SIFT with principal component analysis; the feature vector obtained after PCA (principal component analysis) dimensionality reduction has only 20 dimensions, which reduces matching time, but its feature generation takes much longer than SIFT, so it trades feature computation time for matching time.
The above methods all involve a large amount of computation and are time-consuming when searching for feature points and computing their feature vectors. Considering this, Herbert Bay et al. proposed the SURF (Speeded Up Robust Features) method in 2006. It uses the fast Hessian method to detect feature points on the integral image of the original image, obtains the principal direction by computing the Haar wavelet responses in the x and y directions within a circular neighborhood of the feature point, selects a square region at the feature point whose size corresponds to its scale, divides it into blocks, and accumulates the dx, dy, |dx| and |dy| of each block to obtain a 64-dimensional feature vector (which can be extended to 128 dimensions for higher precision). While maintaining good scale invariance, rotation invariance, brightness invariance, affine invariance and low sensitivity to contamination, it achieves desirable computation and matching times.
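For orientation, the SURF detector and descriptor summarized above are available in OpenCV's contrib modules. The following is a minimal sketch, assuming an opencv-contrib-python build that still ships SURF (the algorithm is patented and absent from some builds); the file name template.jpg is a hypothetical placeholder.

```python
# Minimal sketch: detect SURF keypoints and 64-dimensional descriptors with OpenCV.
# Assumes an opencv-contrib build that includes cv2.xfeatures2d.SURF_create.
import cv2

img = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical file name
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400,       # fast-Hessian threshold
                                   extended=False)             # 64-dim descriptors
keypoints, descriptors = surf.detectAndCompute(img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)  # e.g. N, (N, 64)
```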
A search of the existing literature found the following. Chinese patent document CN101561866, entitled "Character recognition method based on SIFT features and gray difference value histogram features", simply merges the feature vectors produced by the SIFT feature and the gray difference histogram and uses them for feature matching. Chinese patent document CN101339601, entitled "License plate Chinese character recognition method based on the SIFT algorithm", isolates the license plate region and then matches the license plate using SIFT features. The shortcomings of these two techniques are a narrow range of application, slow recognition speed and low recognition accuracy.
Summary of the invention
The objective of the present invention is to overcome the above deficiencies of the prior art and provide a method for identifying information of an image tag. The present invention uses SURF features as the main means of identification, supplemented by LBP features to improve recognition accuracy. It can effectively recognize tags in images or video frames under complex background, deformation, tilt, dirt, partial occlusion, illumination change, color change and viewing angle change, and can be widely used for the recognition of station logos, trademarks, logos, advertising signs and the like.
The present invention is achieved through the following technical solution, which comprises the following steps:
Step 1: collect a number of template images, whose background is not restricted but whose non-template region around the template should be as small as possible, thereby building a template image library.
Step 2: perform Hessian feature point detection on the template images to obtain the feature points of the template images; perform Hessian feature point detection on the image to be identified to obtain the feature points of the image to be identified; establish a rectangular coordinate system for the template images and the image to be identified respectively, with the top-left pixel as the origin, the horizontal rightward direction as the positive x axis and the vertical downward direction as the positive y axis; and obtain the position information and scale information of each feature point of the template images and of the image to be identified.
The Hessian feature point detection is specifically: convolve each pixel of an image with two-dimensional Gaussian filters at several different scales to obtain the elements of the Hessian matrix of each pixel in the corresponding scale spaces, and then obtain the determinant of the Hessian matrix of each pixel in each scale space; the pixels corresponding to the maximum and the minimum Hessian matrix determinant in each scale space are the feature points of the image.
The scale information is the scale of the scale space in which the Hessian matrix corresponding to the feature point lies.
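The Hessian detection described above can be illustrated with a rough numerical sketch. It assumes a grayscale input, a hand-picked set of scales and an illustrative threshold (none of these values come from the patent), and for brevity keeps only local maxima of the determinant response, whereas the text also keeps the minima.

```python
# Rough sketch of determinant-of-Hessian feature point detection across scales.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def hessian_determinant_stack(image, sigmas=(1.2, 1.6, 2.4, 3.2)):
    """One determinant-of-Hessian response map per scale (sigma)."""
    image = image.astype(np.float64)
    responses = []
    for sigma in sigmas:
        lxx = gaussian_filter(image, sigma, order=(0, 2))  # second derivative along x
        lyy = gaussian_filter(image, sigma, order=(2, 0))  # second derivative along y
        lxy = gaussian_filter(image, sigma, order=(1, 1))  # mixed derivative
        responses.append(lxx * lyy - lxy ** 2)             # Hessian determinant
    return np.stack(responses)                             # shape: (num_scales, H, W)

def detect_feature_points(image, threshold=1e-3):
    """Pixels whose response is a local maximum over space and scale (maxima only)."""
    stack = hessian_determinant_stack(image)
    local_max = maximum_filter(stack, size=(3, 3, 3)) == stack
    scale_idx, ys, xs = np.nonzero(local_max & (stack > threshold))
    return list(zip(xs, ys, scale_idx))                    # (x, y, scale index) tuples
```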
Step 3: perform SURF feature extraction and LBP feature extraction on each feature point of the template images to obtain the SURF feature and the LBP feature of each feature point in the template images; perform SURF feature extraction and LBP feature extraction on each feature point of the image to be identified to obtain the SURF feature vector and the LBP feature vector of each feature point in the image to be identified.
The SURF feature extraction is specifically: rotate the feature point in the image to its principal direction; divide the neighborhood around the feature point, whose size is 6 times its scale, into 4*4 = 16 blocks; apply the Haar wavelet transform to 25 uniformly distributed pixels in each block; accumulate the resulting dx, |dx|, dy and |dy| respectively, so that each block yields 4 feature values and the neighborhood of 6 times the scale around the feature point yields 16*4 = 64 feature values in total; these 64 feature values form the SURF feature vector of the feature point.
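As a rough illustration of how the 64 values are assembled (the principal-direction rotation mentioned here is detailed in the next paragraph), the following sketch uses a 20x20 grid of samples (4x4 blocks of 5x5 points) and crude pixel differences as stand-ins for the scale-dependent Haar wavelet responses; it is illustrative rather than a faithful SURF implementation.

```python
# Sketch: assemble a 64-dimensional SURF-style descriptor from a 20x20 sample patch.
import numpy as np

def surf_descriptor(patch):
    """patch: 20x20 array of samples around the feature point (4x4 blocks of 5x5)."""
    patch = patch.astype(np.float64)
    dx = np.zeros_like(patch)
    dy = np.zeros_like(patch)
    dx[:, 1:] = patch[:, 1:] - patch[:, :-1]   # crude stand-in for the Haar response in x
    dy[1:, :] = patch[1:, :] - patch[:-1, :]   # crude stand-in for the Haar response in y
    features = []
    for by in range(4):
        for bx in range(4):
            bdx = dx[5 * by:5 * by + 5, 5 * bx:5 * bx + 5]
            bdy = dy[5 * by:5 * by + 5, 5 * bx:5 * bx + 5]
            # four accumulated values per block: sum dx, sum |dx|, sum dy, sum |dy|
            features += [bdx.sum(), np.abs(bdx).sum(), bdy.sum(), np.abs(bdy).sum()]
    v = np.array(features)                      # 16 blocks * 4 values = 64 dimensions
    return v / (np.linalg.norm(v) + 1e-12)      # normalized for Euclidean matching
```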
The principal direction rotation is specifically: apply the Haar wavelet transform to the pixels in the neighborhood around the feature point whose size is 6 times its scale, obtaining the Haar wavelet response of each pixel in this neighborhood; rotate a window covering 30° around the feature point within this neighborhood in steps of 15°, and sum the x- and y-direction components of the Haar wavelet responses of the pixels covered by the window; each pair of component sums forms a new vector for this neighborhood, the direction indicated by the longest of all these vectors is the principal direction of the feature point, and the coordinate system of the feature point is rotated to this principal direction.
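A sketch of this angular sweep follows, assuming the per-pixel Haar responses hx, hy and their orientations (angles) have already been computed for the neighborhood; the 30° window and 15° step follow the text, while the input names and layout are illustrative assumptions.

```python
# Sketch: estimate the principal direction by sweeping an angular window.
import numpy as np

def principal_direction(hx, hy, angles, window=np.pi / 6, step=np.pi / 12):
    """hx, hy: Haar responses of the neighborhood pixels; angles: their orientations."""
    best_len, best_dir = -1.0, 0.0
    for start in np.arange(0.0, 2 * np.pi, step):
        # pixels whose response orientation falls inside the current angular window
        diff = (angles - start) % (2 * np.pi)
        mask = diff < window
        sx, sy = hx[mask].sum(), hy[mask].sum()   # component sums form a new vector
        length = np.hypot(sx, sy)
        if length > best_len:
            best_len, best_dir = length, np.arctan2(sy, sx)
    return best_dir  # the feature point's coordinate frame is then rotated to this angle
```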
The LBP feature extraction is specifically: compare the gray values of the 8 pixels surrounding each feature point in the image with the gray value of the feature point itself; a pixel whose gray value is greater than or equal to that of the feature point is given weight 1, and a pixel whose gray value is less than that of the feature point is given weight 0; starting from the pixel at the top-left of the feature point, read the weights of these 8 pixels in reverse order; the 8-bit vector formed by these weights is the LBP feature vector of the feature point.
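A minimal sketch of this LBP vector is given below. The thresholding rule (1 if the neighbor's gray value is greater than or equal to the feature point's, 0 otherwise) follows the text; the exact reading order used here is one plausible interpretation of "in reverse order starting from the top-left pixel" and should be treated as an assumption.

```python
# Sketch: 8-bit LBP vector of a feature point from its 8 immediate neighbors.
import numpy as np

def lbp_vector(gray, x, y):
    """Return the 8-bit LBP vector of the feature point at (x, y) in a gray image."""
    center = gray[y, x]
    # neighbors starting at the top-left corner; order is an assumed interpretation
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0)]
    return np.array([1 if gray[y + dy, x + dx] >= center else 0
                     for dy, dx in offsets], dtype=np.uint8)
```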
Step 4: perform SURF feature matching between the set of SURF feature vectors of the image to be identified and the set of SURF feature vectors of each template image to obtain the successfully pre-matched template images.
The SURF feature matching is specifically:
1) For every pair of corresponding feature points in the image to be identified and the template image, compute the Euclidean distance between their SURF feature vectors. When the difference between the minimum Euclidean distance D1 and the second-smallest Euclidean distance D2 is less than the threshold T1, mark this feature point in the template image as a successfully pre-matched feature point; otherwise mark it as an unsuccessfully pre-matched feature point.
2) When the ratio of the number of successfully pre-matched feature points in a template image to the number of unsuccessfully pre-matched feature points in that template image is greater than the threshold T2, mark this template image as a successfully pre-matched template image.
The value range of the threshold T1 is 0.5~0.7.
The value range of the threshold T2 is 0.15~0.3.
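A sketch of this pre-matching decision follows, under stated assumptions. For each template feature point, the smallest (D1) and second-smallest (D2) Euclidean distances to the SURF vectors of the image to be identified are found. The translated text compares the difference of D1 and D2 against T1; since the stated 0.5~0.7 range of T1 is typical of a Lowe-style ratio test, the ratio D1/D2 is used here and should be treated as an assumption. All names are illustrative.

```python
# Sketch: SURF pre-matching of one template against the image to be identified.
import numpy as np

def prematch_template(template_descs, query_descs, t1=0.65, t2=0.2):
    """Return True if the template passes the SURF pre-matching criterion."""
    matched = 0
    for desc in template_descs:
        dists = np.linalg.norm(query_descs - desc, axis=1)  # Euclidean distances
        d1, d2 = np.sort(dists)[:2]                         # smallest, second smallest
        if d1 / (d2 + 1e-12) < t1:                          # assumed ratio form of the test
            matched += 1
    unmatched = len(template_descs) - matched
    return matched / max(unmatched, 1) > t2                 # template-level decision
```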
Step 5: perform LBP feature matching between the set of LBP feature vectors of the image to be identified and the set of LBP feature vectors of each successfully pre-matched template image to obtain the template images that are successfully matched a second time.
The LBP feature matching is specifically:
1) For every pair of corresponding feature points in the image to be identified and a successfully pre-matched template image, compute the Euclidean distance between their LBP feature vectors. When the difference between the minimum Euclidean distance D'1 and the second-smallest Euclidean distance D'2 is less than the threshold T1, mark this feature point in the successfully pre-matched template image as a feature point that is successfully matched a second time; otherwise mark it as a feature point that is unsuccessfully matched a second time.
2) When the ratio of the number of feature points successfully matched a second time in a successfully pre-matched template image to the number of feature points unsuccessfully matched a second time in that template image is greater than the threshold T3, mark this successfully pre-matched template image as a template image that is successfully matched a second time.
The value range of the threshold T3 is 0.2~0.4.
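A short sketch of chaining the two matching stages is given below: templates that pass the SURF pre-match are re-checked with the LBP vectors of their feature points, using the same nearest/second-nearest comparison with T1 and the count-ratio threshold T3. It reuses the prematch_template sketch from step 4; the dictionary layout and names are illustrative assumptions.

```python
# Sketch: second-stage LBP check applied only to pre-matched templates.
def second_match(query_lbp, template_lbp_by_name, prematched_names, t1=0.65, t3=0.25):
    """Return the names of pre-matched templates that also pass LBP matching."""
    return [name for name in prematched_names
            if prematch_template(template_lbp_by_name[name], query_lbp, t1=t1, t2=t3)]
```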
Step 6: apply weighting to the template images that are successfully matched a second time to obtain the weight value of each such template image; the template image with the largest weight value is the template image that best matches the image to be identified.
The weighting is computed according to the following formula, where: Q is the weight value of a template image that is successfully matched a second time; D1 is the minimum Euclidean distance between feature points obtained when this template image undergoes SURF feature matching; D2 is the second-smallest Euclidean distance between feature points obtained when this template image undergoes SURF feature matching; D'1 is the minimum Euclidean distance between feature points obtained when this template image undergoes LBP feature matching; and D'2 is the second-smallest Euclidean distance between feature points obtained when this template image undergoes LBP feature matching.
Compared with the prior art, the beneficial effects of the present invention are: the present invention uses the local SURF features and LBP features of the key regions of a tag to identify the tag, and is robust to complex background, deformation, tilt, dirt, partial occlusion, illumination change, color change and viewing angle change in the image or video frame; it does not require very complicated preprocessing operations such as tilt correction or scaling of the image or video frame to be identified; it does not require a large amount of machine learning, and compared with the similar SIFT method, the SURF method runs faster, is more efficient, saves computation time and has high accuracy; the tags referred to in the present invention can in practice be extended to objects in any picture, such as text, license plates and station logos, so its range of application is very wide.
Description of drawings
Fig. 1 is a schematic flowchart of the present invention;
Fig. 2 is a schematic view of the image to be identified in the embodiment;
Fig. 3 is a schematic view of the template image library of the embodiment;
Fig. 4 is a schematic view of the feature points of Fig. 2;
Fig. 5 is a schematic view of the feature points of Fig. 3;
Fig. 6 is a schematic view of the final matching result of the embodiment.
Embodiment
The method of the present invention is further described below with reference to the accompanying drawings. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and a specific operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment
The present embodiment is used to identify the image to be identified shown in Fig. 2. Its flowchart is shown in Fig. 1 and specifically comprises the following steps:
Step 1: collect a number of template images, whose background is not restricted but whose non-template region around the template should be as small as possible, thereby building a template image library.
The template image library of the present embodiment is shown in Fig. 3 and contains a template corresponding to the image to be identified.
Step 2: perform Hessian feature point detection on the template images to obtain the feature points of the template images; perform Hessian feature point detection on the image to be identified to obtain the feature points of the image to be identified; establish a rectangular coordinate system for the template images and the image to be identified respectively, with the top-left pixel as the origin, the horizontal rightward direction as the positive x axis and the vertical downward direction as the positive y axis; and extract the position information and scale information of each feature point of the template images and of the image to be identified.
The Hessian feature point detection is specifically: convolve each pixel of an image with two-dimensional Gaussian filters at several different scales to obtain the elements of the Hessian matrix of each pixel in the corresponding scale spaces, and then obtain the determinant of the Hessian matrix of each pixel in each scale space; the pixels corresponding to the maximum and the minimum Hessian matrix determinant in each scale space are the feature points of the image.
The scale information is the scale of the scale space in which the Hessian matrix corresponding to the feature point lies.
The feature points obtained in the present embodiment for the image to be identified shown in Fig. 2 are shown in Fig. 4, where each dot is a feature point and the circle containing a feature point represents its scale information.
The feature points obtained in the present embodiment for the template image library of Fig. 3 are shown in Fig. 5, where each dot is a feature point and the circle containing a feature point represents its scale information.
Step 3: perform SURF feature extraction and LBP feature extraction on each feature point of the template images to obtain the SURF feature and the LBP feature of each feature point in the template images; perform SURF feature extraction and LBP feature extraction on each feature point of the image to be identified to obtain the SURF feature vector and the LBP feature vector of each feature point in the image to be identified.
The SURF feature extraction is specifically: rotate the feature point in the image to its principal direction; divide the neighborhood around the feature point, whose size is 6 times its scale, into 4*4 = 16 blocks; apply the Haar wavelet transform to 25 uniformly distributed pixels in each block; accumulate the resulting dx, |dx|, dy and |dy| respectively, so that each block yields 4 feature values and the neighborhood of 6 times the scale around the feature point yields 16*4 = 64 feature values in total; these 64 feature values form the SURF feature vector of the feature point.
The principal direction rotation is specifically: apply the Haar wavelet transform to the pixels in the neighborhood around the feature point whose size is 6 times its scale, obtaining the Haar wavelet response of each pixel in this neighborhood; rotate a window covering 30° around the feature point within this neighborhood in steps of 15°, and sum the x- and y-direction components of the Haar wavelet responses of the pixels covered by the window; each pair of component sums forms a new vector for this neighborhood, the direction indicated by the longest of all these vectors is the principal direction of the feature point, and the coordinate system of the feature point is rotated to this principal direction.
The LBP feature extraction is specifically: compare the gray values of the 8 pixels surrounding each feature point in the image with the gray value of the feature point itself; a pixel whose gray value is greater than or equal to that of the feature point is given weight 1, and a pixel whose gray value is less than that of the feature point is given weight 0; starting from the pixel at the top-left of the feature point, read the weights of these 8 pixels in reverse order; the 8-bit vector formed by these weights is the LBP feature vector of the feature point.
Step 4: perform SURF feature matching between the set of SURF feature vectors of the image to be identified and the set of SURF feature vectors of each template image to obtain the successfully pre-matched template images.
The SURF feature matching is specifically:
1) For every pair of corresponding feature points in the image to be identified and the template image, compute the Euclidean distance between their SURF feature vectors. When the difference between the minimum Euclidean distance D1 and the second-smallest Euclidean distance D2 is less than the threshold T1, mark this feature point in the template image as a successfully pre-matched feature point; otherwise mark it as an unsuccessfully pre-matched feature point.
2) When the ratio of the number of successfully pre-matched feature points in a template image to the number of unsuccessfully pre-matched feature points in that template image is greater than the threshold T2, mark this template image as a successfully pre-matched template image.
In the present embodiment, the threshold T1 is 0.65 and the threshold T2 is 0.2.
Step 5: perform LBP feature matching between the set of LBP feature vectors of the image to be identified and the set of LBP feature vectors of each successfully pre-matched template image to obtain the template images that are successfully matched a second time.
The LBP feature matching is specifically:
1) For every pair of corresponding feature points in the image to be identified and a successfully pre-matched template image, compute the Euclidean distance between their LBP feature vectors. When the difference between the minimum Euclidean distance D'1 and the second-smallest Euclidean distance D'2 is less than the threshold T1, mark this feature point in the successfully pre-matched template image as a feature point that is successfully matched a second time; otherwise mark it as a feature point that is unsuccessfully matched a second time.
2) When the ratio of the number of feature points successfully matched a second time in a successfully pre-matched template image to the number of feature points unsuccessfully matched a second time in that template image is greater than the threshold T3, mark this successfully pre-matched template image as a template image that is successfully matched a second time.
In the present embodiment, the threshold T3 is 0.25.
Step 6: apply weighting to the template images that are successfully matched a second time to obtain the weight value of each such template image; the template image with the largest weight value is the template image that best matches the image to be identified.
The weighting is computed according to the following formula, where: Q is the weight value of a template image that is successfully matched a second time; D1 is the minimum Euclidean distance between feature points obtained when this template image undergoes SURF feature matching; D2 is the second-smallest Euclidean distance between feature points obtained when this template image undergoes SURF feature matching; D'1 is the minimum Euclidean distance between feature points obtained when this template image undergoes LBP feature matching; and D'2 is the second-smallest Euclidean distance between feature points obtained when this template image undergoes LBP feature matching.
In the present embodiment, the template image in the lower left corner of the template image library (i.e., of Fig. 3) has the largest weight value, which is 4.1. Its matching with the image to be identified is shown in Fig. 6, where the points connected by black lines are the matched feature point pairs.
Using the method of the present embodiment, 100 images containing the Beijing Olympic logo and 50 images not containing it were identified; the recognition rate reached 91% and the false detection rate was 5.3%. Each image to be detected has a size of 640*480, and processing one image, from feature point detection to computing the feature vectors of its feature points, takes about 1 second on average, so the method can meet practical recognition requirements.
Claims (10)
1. A method for identifying information of an image tag, characterized by comprising the following steps:
Step 1: collect a number of template images and build a template image library;
Step 2: perform Hessian feature point detection on the template images to obtain the feature points of the template images; perform Hessian feature point detection on the image to be identified to obtain the feature points of the image to be identified; establish a rectangular coordinate system for the template images and the image to be identified respectively, with the top-left pixel as the origin, the horizontal rightward direction as the positive x axis and the vertical downward direction as the positive y axis; and obtain the position information and scale information of each feature point of the template images and of the image to be identified;
Step 3: perform SURF feature extraction and LBP feature extraction on each feature point of the template images to obtain the SURF feature and the LBP feature of each feature point in the template images; perform SURF feature extraction and LBP feature extraction on each feature point of the image to be identified to obtain the SURF feature vector and the LBP feature vector of each feature point in the image to be identified;
Step 4: perform SURF feature matching between the set of SURF feature vectors of the image to be identified and the set of SURF feature vectors of each template image to obtain the successfully pre-matched template images;
Step 5: perform LBP feature matching between the set of LBP feature vectors of the image to be identified and the set of LBP feature vectors of each successfully pre-matched template image to obtain the template images that are successfully matched a second time;
Step 6: apply weighting to the template images that are successfully matched a second time to obtain the weight value of each such template image; the template image with the largest weight value is the template image that best matches the image to be identified.
2. The method for identifying information of an image tag according to claim 1, characterized in that the Hessian feature point detection in step 2 is: convolve each pixel of an image with a two-dimensional Gaussian filter to obtain the elements of the Hessian matrix of each pixel in several scale spaces, and then obtain the determinant of the Hessian matrix of each pixel in each scale space; the pixels corresponding to the maximum and the minimum Hessian matrix determinant in each scale space are the feature points of the image.
3. The method for identifying information of an image tag according to claim 1, characterized in that the SURF feature extraction in step 3 is: rotate the feature point in the image to its principal direction; divide the neighborhood around the feature point, whose size is 6 times its scale, into 4*4 = 16 blocks; apply the Haar wavelet transform to 25 uniformly distributed pixels in each block; accumulate the resulting dx, |dx|, dy and |dy| respectively, so that each block yields 4 feature values and the neighborhood of 6 times the scale around the feature point yields 16*4 = 64 feature values in total; these 64 feature values form the SURF feature vector of the feature point.
4. The method for identifying information of an image tag according to claim 1, characterized in that the LBP feature extraction in step 3 is: compare the gray values of the 8 pixels surrounding each feature point in the image with the gray value of the feature point itself; a pixel whose gray value is greater than or equal to that of the feature point is given weight 1, and a pixel whose gray value is less than that of the feature point is given weight 0; starting from the pixel at the top-left of the feature point, read the weights of these 8 pixels in reverse order; the 8-bit vector formed by these weights is the LBP feature vector of the feature point.
5. The method for identifying information of an image tag according to claim 1, characterized in that the SURF feature matching in step 4 is:
1) for every pair of corresponding feature points in the image to be identified and the template image, compute the Euclidean distance between their SURF feature vectors; when the difference between the minimum Euclidean distance D1 and the second-smallest Euclidean distance D2 is less than the threshold T1, mark this feature point in the template image as a successfully pre-matched feature point; otherwise mark it as an unsuccessfully pre-matched feature point;
2) when the ratio of the number of successfully pre-matched feature points in a template image to the number of unsuccessfully pre-matched feature points in that template image is greater than the threshold T2, mark this template image as a successfully pre-matched template image.
6. The method for identifying information of an image tag according to claim 5, characterized in that the value range of the threshold T2 is 0.15~0.3.
7. The method for identifying information of an image tag according to claim 1, characterized in that the LBP feature matching in step 5 is:
1) for every pair of corresponding feature points in the image to be identified and a successfully pre-matched template image, compute the Euclidean distance between their LBP feature vectors; when the difference between the minimum Euclidean distance D'1 and the second-smallest Euclidean distance D'2 is less than the threshold T1, mark this feature point in the successfully pre-matched template image as a feature point that is successfully matched a second time; otherwise mark it as a feature point that is unsuccessfully matched a second time;
2) when the ratio of the number of feature points successfully matched a second time in a successfully pre-matched template image to the number of feature points unsuccessfully matched a second time in that template image is greater than the threshold T3, mark this successfully pre-matched template image as a template image that is successfully matched a second time.
8. The method for identifying information of an image tag according to claim 5 or 7, characterized in that the value range of the threshold T1 is 0.5~0.7.
9. The method for identifying information of an image tag according to claim 7, characterized in that the value range of the threshold T3 is 0.2~0.4.
10. The method for identifying information of an image tag according to claim 1, characterized in that the weighting in step 6 is computed according to the following formula, where: Q is the weight value of a template image that is successfully matched a second time; D1 is the minimum Euclidean distance between feature points obtained when this template image undergoes SURF feature matching; D2 is the second-smallest Euclidean distance between feature points obtained when this template image undergoes SURF feature matching; D'1 is the minimum Euclidean distance between feature points obtained when this template image undergoes LBP feature matching; and D'2 is the second-smallest Euclidean distance between feature points obtained when this template image undergoes LBP feature matching.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010169350A CN101807257A (en) | 2010-05-12 | 2010-05-12 | Method for identifying information of image tag |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010169350A CN101807257A (en) | 2010-05-12 | 2010-05-12 | Method for identifying information of image tag |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101807257A true CN101807257A (en) | 2010-08-18 |
Family
ID=42609045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201010169350A Pending CN101807257A (en) | 2010-05-12 | 2010-05-12 | Method for identifying information of image tag |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101807257A (en) |
Cited By (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102004910A (en) * | 2010-12-03 | 2011-04-06 | 上海交通大学 | Video target tracking method based on SURF (speeded-up robust features) feature point diagram matching and motion generating model |
CN102567736A (en) * | 2010-12-14 | 2012-07-11 | 三星电子株式会社 | Device and method for recognizing image |
CN102169601A (en) * | 2011-01-24 | 2011-08-31 | 北京北大千方科技有限公司 | Anti-dismantling method and system of on-board unit (OBU) as well as OBU |
CN102169601B (en) * | 2011-01-24 | 2013-10-09 | 北京北大千方科技有限公司 | Anti-dismantling method and system of on-board electronic tag as well as on-board electronic tag |
CN103020625A (en) * | 2011-09-26 | 2013-04-03 | 华为软件技术有限公司 | Local image characteristic generation method and device |
CN103186762A (en) * | 2011-12-28 | 2013-07-03 | 天津市亚安科技股份有限公司 | License plate character recognition method based on SURF matching algorithm |
CN102865859A (en) * | 2012-09-21 | 2013-01-09 | 西北工业大学 | Aviation sequence image position estimating method based on SURF (Speeded Up Robust Features) |
CN102865859B (en) * | 2012-09-21 | 2014-11-05 | 西北工业大学 | Aviation sequence image position estimating method based on SURF (Speeded Up Robust Features) |
CN103426186A (en) * | 2013-09-05 | 2013-12-04 | 山东大学 | Improved SURF fast matching method |
CN103426186B (en) * | 2013-09-05 | 2016-03-02 | 山东大学 | A kind of SURF fast matching method of improvement |
CN103676976B (en) * | 2013-12-23 | 2016-01-13 | 中国地质科学院地质研究所 | The bearing calibration of three-dimensional working platform resetting error |
CN103902987B (en) * | 2014-04-17 | 2017-10-20 | 福州大学 | A kind of TV station symbol recognition method based on convolutional network |
CN103902987A (en) * | 2014-04-17 | 2014-07-02 | 福州大学 | Station caption identifying method based on convolutional network |
CN104239874B (en) * | 2014-09-29 | 2017-11-03 | 青岛海信医疗设备股份有限公司 | A kind of organ blood vessel recognition methods and device |
CN104239874A (en) * | 2014-09-29 | 2014-12-24 | 青岛海信医疗设备股份有限公司 | Method and device for identifying organ blood vessels |
CN105528610A (en) * | 2014-09-30 | 2016-04-27 | 阿里巴巴集团控股有限公司 | Character recognition method and device |
CN105528610B (en) * | 2014-09-30 | 2019-05-07 | 阿里巴巴集团控股有限公司 | Character recognition method and device |
CN104376548A (en) * | 2014-11-07 | 2015-02-25 | 中国电子科技集团公司第二十八研究所 | Fast image splicing method based on improved SURF algorithm |
CN104537376A (en) * | 2014-11-25 | 2015-04-22 | 深圳创维数字技术有限公司 | A method, a relevant device, and a system for identifying a station caption |
CN104537376B (en) * | 2014-11-25 | 2018-04-27 | 深圳创维数字技术有限公司 | One kind identification platform calibration method and relevant device, system |
CN104780362A (en) * | 2015-04-24 | 2015-07-15 | 宏祐图像科技(上海)有限公司 | Video static logo detecting method based on local feature description |
CN105554570A (en) * | 2015-12-31 | 2016-05-04 | 北京奇艺世纪科技有限公司 | Copyrighted video monitoring method and device |
CN105554570B (en) * | 2015-12-31 | 2019-04-12 | 北京奇艺世纪科技有限公司 | A kind of copyright video monitoring method and device |
CN105681899B (en) * | 2015-12-31 | 2019-05-10 | 北京奇艺世纪科技有限公司 | A kind of detection method and device of similar video and pirate video |
CN105681899A (en) * | 2015-12-31 | 2016-06-15 | 北京奇艺世纪科技有限公司 | Method and device for detecting similar video and pirated video |
TWI567655B (en) * | 2016-02-04 | 2017-01-21 | Calin Technology Co Ltd | Object of two - dimensional code discrimination method |
CN105975621B (en) * | 2016-05-25 | 2019-12-13 | 北京小米移动软件有限公司 | Method and device for identifying search engine in browser page |
CN105975621A (en) * | 2016-05-25 | 2016-09-28 | 北京小米移动软件有限公司 | Method and device for recognizing search engine in browser page |
CN106327483A (en) * | 2016-08-12 | 2017-01-11 | 广州视源电子科技股份有限公司 | Method, system and device for attaching logo of detection equipment |
CN106898017A (en) * | 2017-02-27 | 2017-06-27 | 网易(杭州)网络有限公司 | Method, device and terminal device for recognizing image local area |
CN106898017B (en) * | 2017-02-27 | 2019-05-31 | 网易(杭州)网络有限公司 | The method, apparatus and terminal device of image local area for identification |
CN107180230B (en) * | 2017-05-08 | 2020-06-23 | 上海理工大学 | Universal license plate recognition method |
CN107180230A (en) * | 2017-05-08 | 2017-09-19 | 上海理工大学 | General licence plate recognition method |
CN107798325B (en) * | 2017-08-18 | 2021-04-16 | 中国银联股份有限公司 | Card recognition method and apparatus, computer storage medium |
CN107798325A (en) * | 2017-08-18 | 2018-03-13 | 中国银联股份有限公司 | Card identification method and equipment, computer-readable storage medium |
CN108960280A (en) * | 2018-05-21 | 2018-12-07 | 北京中科闻歌科技股份有限公司 | A kind of picture similarity detection method and system |
CN108960280B (en) * | 2018-05-21 | 2020-07-24 | 北京中科闻歌科技股份有限公司 | Picture similarity detection method and system |
CN108960412A (en) * | 2018-06-29 | 2018-12-07 | 北京京东尚科信息技术有限公司 | Image-recognizing method, device and computer readable storage medium |
CN109086764A (en) * | 2018-07-25 | 2018-12-25 | 北京达佳互联信息技术有限公司 | Station caption detection method, device and storage medium |
CN109447023A (en) * | 2018-11-08 | 2019-03-08 | 北京奇艺世纪科技有限公司 | Determine method, video scene switching recognition methods and the device of image similarity |
CN109978132A (en) * | 2018-12-24 | 2019-07-05 | 中国科学院深圳先进技术研究院 | A kind of neural network method and system refining vehicle identification |
CN110287847A (en) * | 2019-06-19 | 2019-09-27 | 长安大学 | Vehicle grading search method based on Alexnet-CLbpSurf multiple features fusion |
CN110472643A (en) * | 2019-08-20 | 2019-11-19 | 山东浪潮人工智能研究院有限公司 | A kind of optical imagery employee's card identification method based on Feature Points Matching |
CN112633305A (en) * | 2019-09-24 | 2021-04-09 | 深圳云天励飞技术有限公司 | Key point marking method and related equipment |
CN111597885A (en) * | 2020-04-07 | 2020-08-28 | 上海推乐信息技术服务有限公司 | Video additional content detection method and system |
CN113112503B (en) * | 2021-05-10 | 2022-11-22 | 上海合乐医疗科技有限公司 | Method for realizing automatic detection of medicine label based on machine vision |
CN113112503A (en) * | 2021-05-10 | 2021-07-13 | 上海贝德尔生物科技有限公司 | Method for realizing automatic detection of medicine label based on machine vision |
CN113469216B (en) * | 2021-05-31 | 2024-02-23 | 浙江中烟工业有限责任公司 | Retail terminal poster identification and integrity judgment method, system and storage medium |
CN113469216A (en) * | 2021-05-31 | 2021-10-01 | 浙江中烟工业有限责任公司 | Retail terminal poster identification and integrity judgment method, system and storage medium |
CN113379999A (en) * | 2021-06-22 | 2021-09-10 | 徐州才聚智能科技有限公司 | Fire detection method and device, electronic equipment and storage medium |
CN113379999B (en) * | 2021-06-22 | 2024-05-24 | 徐州才聚智能科技有限公司 | Fire detection method, device, electronic equipment and storage medium |
CN115331212A (en) * | 2022-10-13 | 2022-11-11 | 南通东鼎彩印包装厂 | Method for identifying abnormal code spraying of zip-top can bottom |
CN115331212B (en) * | 2022-10-13 | 2023-10-27 | 南通东鼎彩印包装厂 | Method for identifying abnormal spraying code at bottom of pop can |
CN117889867A (en) * | 2024-03-18 | 2024-04-16 | 南京师范大学 | Path planning method based on local self-attention moving window algorithm |
CN117889867B (en) * | 2024-03-18 | 2024-05-24 | 南京师范大学 | Path planning method based on local self-attention moving window algorithm |
CN118577517A (en) * | 2024-08-02 | 2024-09-03 | 成都普什信息自动化有限公司 | Intelligent labeling detection method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101807257A (en) | Method for identifying information of image tag | |
Patel et al. | Automatic number plate recognition system (anpr): A survey | |
Luo et al. | Design and implementation of a card reader based on build-in camera | |
CN102609686B (en) | Pedestrian detection method | |
Bhattacharya et al. | Devanagari and bangla text extraction from natural scene images | |
CN107103317A (en) | Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution | |
CN106815583B (en) | Method for positioning license plate of vehicle at night based on combination of MSER and SWT | |
CN105046252A (en) | Method for recognizing Renminbi (Chinese currency yuan) crown codes | |
CN107563380A (en) | A kind of vehicle license plate detection recognition method being combined based on MSER and SWT | |
CN114155527A (en) | Scene text recognition method and device | |
CN108154151B (en) | Rapid multi-direction text line detection method | |
CN103530590A (en) | DPM (direct part mark) two-dimensional code recognition system | |
CN101339601A (en) | License plate Chinese character recognition method based on SIFT algorithm | |
CN104408449A (en) | Intelligent mobile terminal scene character processing method | |
CN104657728A (en) | Barcode recognition system based on computer vision | |
CN103699876B (en) | Method and device for identifying vehicle number based on linear array CCD (Charge Coupled Device) images | |
CN115810197A (en) | Multi-mode electric power form recognition method and device | |
Budianto | Automatic License Plate Recognition: A Review with Indonesian Case Study | |
CN108427954B (en) | Label information acquisition and recognition system | |
CN104346596A (en) | Identification method and identification device for QR (Quick Response) code | |
CN110766001B (en) | Bank card number positioning and end-to-end identification method based on CNN and RNN | |
Lokkondra et al. | DEFUSE: deep fused end-to-end video text detection and recognition | |
CN115082923B (en) | Milk packing box production date identification method based on machine vision | |
CN103235951A (en) | Preliminary positioning method for matrix type two-dimensional bar code | |
CN112288372B (en) | Express bill identification method capable of simultaneously identifying one-dimensional bar code and three-segment code characters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20100818 |