CN111091133B - Bronze ware gold image recognition method based on sift algorithm - Google Patents

Bronze ware gold image recognition method based on sift algorithm

Info

Publication number
CN111091133B
CN111091133B · CN201911069702.4A
Authority
CN
China
Prior art keywords
image
bronze
matching
sift
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911069702.4A
Other languages
Chinese (zh)
Other versions
CN111091133A (en)
Inventor
王慧琴
王可
赵若晴
商立丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Qinghechuang Intelligent Technology Co.,Ltd.
Original Assignee
Xian University of Architecture and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Architecture and Technology filed Critical Xian University of Architecture and Technology
Priority to CN201911069702.4A priority Critical patent/CN111091133B/en
Publication of CN111091133A publication Critical patent/CN111091133A/en
Application granted granted Critical
Publication of CN111091133B publication Critical patent/CN111091133B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention discloses a bronze inscription image recognition method based on the SIFT algorithm. First, bronze rubbing image data of a bronze ware are collected with an image segmentation method and a bronze inscription data set is established. Next, SIFT feature points of the bronze rubbing images are detected and described with an improved SIFT feature extraction algorithm. Finally, the rubbing image feature points obtained in step two are matched using the cosine of the angle between descriptor vectors, and the image matching result is obtained. Because the improved SIFT feature extraction algorithm reduces the descriptor dimension, the computational complexity drops and real-time performance improves markedly; matching accuracy and time complexity are clearly better than those of the traditional method, so the approach is well suited to bronze inscription image recognition and matching. The precision of inscription image matching is improved, the time complexity is reduced, and inscription images can be identified effectively.

Description

Bronze ware gold image recognition method based on sift algorithm
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a bronze inscription image recognition method based on the SIFT algorithm.
Background
The study of bronze inscriptions is an important part of paleography, the study of ancient writing. Only by applying the scientific research methods of paleography, that is, by studying the glyph forms of bronze inscriptions, their textual conventions, their phrasing and grammar, and the evolution of these features in each historical period, can the characters be understood in depth. In short, reading bronze inscriptions demands a broad knowledge base and long training from researchers, and is a very challenging task.
The SIFT algorithm extracts feature points with a local image descriptor that is built on scale space and remains invariant to image scaling, rotation and, to a degree, affine transformation; it is widely used in image processing. With the development of computer vision, registration based on image feature points has become the mainstream direction and development trend of image matching technology, and many feature point extraction algorithms have been proposed at home and abroad. Herbert Bay et al. proposed the SURF algorithm in 2006, Stefan Leutenegger et al. the BRISK algorithm in 2011, Ethan Rublee et al. the ORB algorithm, and Alexandre Alahi et al. the FREAK algorithm; all of these are better than SIFT in time complexity, but SIFT is still widely used because its accuracy is generally higher than that of the other algorithms. Chinese researchers have also proposed several feature point detection algorithms: Yang Xingfang proposed a USN-based feature detection algorithm, and Wang Lizhong et al. devised a multi-scale Harris feature detection algorithm based on image segmentation. These newer methods consume less time than the original SIFT algorithm, but are not as accurate.
Finding an algorithm that reduces the time complexity while preserving accuracy is therefore one of the subjects of the applicant's research on bronze inscription image recognition.
Disclosure of Invention
Aiming at the above defects or shortcomings of the SIFT algorithm, the invention provides a bronze inscription image recognition method based on an improved SIFT algorithm, so as to improve retrieval efficiency and matching accuracy.
In order to achieve the above task, the present invention adopts the following technical solutions:
The bronze inscription image recognition method based on the improved SIFT algorithm is characterized by comprising the following steps:
step one, acquiring bronze rubbing image data of a bronze ware with an image segmentation method, and establishing a bronze inscription data set;
step two, detecting SIFT feature points of the bronze rubbing images by means of an improved SIFT feature extraction algorithm, and describing the feature points;
and step three, matching the rubbing image feature points obtained in step two by the cosine of the angle between descriptor vectors, and obtaining the image matching result.
In the image segmentation method of step one, the bronze rubbing image is binarized with an automatically selected threshold, every row and column whose number of black pixels exceeds a set threshold is located, and the image is segmented along these runs to acquire the bronze rubbing image data.
Further, the step of performing the identification matching by using the improved sift feature extraction algorithm in the second step is as follows:
a) First, with a feature point as the center, a circular neighborhood of radius 8 is extracted as the feature point's neighborhood range; decreasing the radius in steps of two pixels divides this neighborhood into 4 concentric rings, each grid cell representing one pixel;
the center of the concentric rings is taken as the feature point, written M(p_1, p_2); the largest diameter is 16, and the circular region can be expressed as:
(x - p_1)² + (y - p_2)² = r²  (1)
after the image is rotated, pixels in the same ring only change their position within that ring, while their other relative characteristics remain essentially unchanged, so the descriptor has good rotation invariance;
b) The gradient magnitude and direction of every pixel are computed, and a gradient histogram is used to accumulate 12 orientation bins within each ring, forming 1 seed point; each seed point carries vector information for 12 directions, and a 4 × 12 = 48-dimensional feature vector is generated in total;
c) To prevent small shifts in feature point localization from causing abrupt changes in the descriptor, the descriptor is reordered by a cyclic left shift: the maximum value of the innermost ring is moved to the first position and the other concentric rings are rotated correspondingly, so the ordering does not change when the image is rotated by an arbitrary angle;
d) Finally, the vector is normalized, which further reduces the influence of illumination changes. Let M' = (m'_1, m'_2, ..., m'_48) be the feature point descriptor; the normalization formula is:
m_i = m'_i / ||M'||,  i = 1, 2, ..., 48,  where ||M'|| = √((m'_1)² + (m'_2)² + ... + (m'_48)²)  (2)
e) When the similarity degree between two vectors is measured, a similarity function can be adopted, and the smaller the function value is, the larger the vector difference is, and the smaller the similarity is;
the similarity is measured by using cosine values of vector included angles, and the larger the cosine values are, the smaller the included angle between two vectors is, and the higher the similarity between the vectors is; the cosine value between the two vectors is deduced by the Euclidean dot product and magnitude formula as follows:
A·B=||A||||B||cosθ (3)。
according to the bronze image recognition method based on the sift algorithm, as the improved sift feature extraction algorithm is adopted, the operation complexity is reduced through dimension reduction, the time efficiency is higher than that of the traditional algorithm, the instantaneity is greatly improved, the matching accuracy and the time complexity are obviously superior to those of the traditional method, and the bronze image recognition method based on the sift algorithm is more suitable for bronze image recognition matching.
By constructing feature descriptors over circular sub-regions, the feature vector dimension is reduced and a new SIFT feature descriptor is obtained; this improves the precision of inscription image matching, reduces the time complexity, and allows bronze inscription images to be identified effectively.
Drawings
FIG. 1 is a flow chart of a modified sift image matching algorithm;
FIG. 2 illustrates the improved feature descriptor, where (a) shows the partitioning of the improved keypoint neighborhood and (b) shows the descriptor gradient directions;
FIG. 3 compares two groups of experimental results with the conventional SIFT algorithm, where (a) and (c) are matching results of the conventional SIFT algorithm and (b) and (d) are matching results of the improved SIFT algorithm.
The invention is described in further detail below with reference to the drawings and examples.
Detailed Description
Research has found that bronze inscription images often contain many noise points, and through the long-term evolution of inscription glyphs each character has on average more than twenty variant forms, so the shape of a character is not fixed; a feature extraction method with abstract mapping capability is therefore needed. The applicant has accordingly proposed an improved SIFT matching algorithm.
The embodiment provides a bronze inscription image recognition method based on the SIFT algorithm, which specifically comprises the following steps:
1) Collect bronze rubbing image data of a bronze ware with an image segmentation method and establish a bronze inscription data set;
2) Detect SIFT feature points of the bronze rubbing images with the improved SIFT algorithm and describe the feature points;
3) Match the rubbing image feature points obtained in step 2) using the cosine of the angle between descriptor vectors, and obtain the image matching result.
In step 1), the image segmentation method is as follows: binarize the image with an automatically selected threshold, locate every row and column whose number of black pixels exceeds a set threshold, segment the image accordingly, and acquire the bronze rubbing image data.
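For illustration, a minimal Python sketch of this segmentation step is given below. The function and parameter names (segment_rubbing, row_thresh, col_thresh) are illustrative, and Otsu's method is assumed for the automatic threshold, which the patent does not specify.

    import cv2
    import numpy as np

    def _runs(profile, thresh):
        # (start, end) index pairs where the projection profile exceeds thresh
        idx = np.flatnonzero(profile > thresh)
        if idx.size == 0:
            return []
        breaks = np.flatnonzero(np.diff(idx) > 1)
        starts = np.r_[idx[0], idx[breaks + 1]]
        ends = np.r_[idx[breaks], idx[-1]]
        return list(zip(starts, ends + 1))

    def segment_rubbing(path, row_thresh=5, col_thresh=5):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Otsu's method picks the binarization threshold automatically;
        # THRESH_BINARY_INV turns the black ink pixels into foreground (255).
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        ink = binary > 0
        patches = []
        for r0, r1 in _runs(ink.sum(axis=1), row_thresh):             # text rows
            for c0, c1 in _runs(ink[r0:r1].sum(axis=0), col_thresh):  # characters
                patches.append(gray[r0:r1, c0:c1])
        return patches

Each returned patch would then be treated as one candidate character image of the bronze inscription data set.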
The traditional SIFT algorithm requires a large amount of computation on high-dimensional vectors, which increases the matching time. In step 2) the inventors therefore adopt an improved SIFT feature extraction algorithm that reduces the dimension of the SIFT feature descriptor, gaining processing speed and robustness and thereby improving matching precision. After the dimension reduction of the SIFT feature descriptor, the matching time decreases. The traditional SIFT algorithm produces many mismatches on this data set, whereas the improved SIFT feature extraction algorithm shows good distinctiveness, rotation resistance and noise immunity.
The steps of identification matching by adopting the improved sift feature extraction algorithm are as follows:
a) First, with a feature point as the center, a circular neighborhood of radius 8 is extracted as the feature point's neighborhood range; decreasing the radius in steps of two pixels divides this neighborhood into 4 concentric rings, each grid cell representing one pixel.
With the center of the concentric rings (FIG. 2) as the feature point, the feature point is represented as M(p_1, p_2); the largest diameter is 16, and the circular region can be expressed as:
(x - p_1)² + (y - p_2)² = r²  (1)
Pixels in the same ring only change their position within that ring after the image is rotated, while their other relative characteristics remain essentially unchanged, so the descriptor has good rotation invariance.
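A small NumPy sketch of this ring partition is shown below; it labels every pixel of a 17 × 17 window (radius 8, ring width 2 pixels) with the index of the ring it belongs to. The boundary handling (pixels at distance exactly 8 are dropped) is an assumption made only for illustration.

    import numpy as np

    def ring_index_map(radius=8, ring_width=2):
        # Ring index per pixel: 0 = innermost disc, 3 = outermost ring, -1 = outside
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        dist = np.sqrt(x ** 2 + y ** 2)
        rings = np.full(dist.shape, -1, dtype=int)
        for k in range(radius // ring_width):          # 4 rings for radius 8
            rings[(dist >= k * ring_width) & (dist < (k + 1) * ring_width)] = k
        return rings

    # A 90-degree rotation permutes pixels within each ring but never moves a
    # pixel into another ring, which is the basis of the rotation-invariance claim.
    assert np.array_equal(ring_index_map(), np.rot90(ring_index_map()))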
b) The gradient magnitude and direction of every pixel are computed, and a gradient histogram is used to accumulate 12 orientation bins within each ring, forming 1 seed point; each seed point carries vector information for 12 directions, and a 4 × 12 = 48-dimensional feature vector is generated in total.
The new feature descriptor applies different weights to the different rings, so the feature expression of the key point is more specific; compared with the original SIFT feature descriptor, the resulting feature vector reduces the computational complexity and saves computation time.
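Continuing the sketch above (rings is the output of ring_index_map), the following fragment accumulates a magnitude-weighted 12-bin orientation histogram in each of the 4 rings and concatenates them into the 48-dimensional vector. The per-ring weights and the simple magnitude weighting are assumptions, since the patent only states that different rings are weighted differently.

    import numpy as np

    def ring_descriptor(patch, rings, n_bins=12, ring_weights=(1.0, 1.0, 1.0, 1.0)):
        # patch: 17 x 17 grey-level window centered on the feature point
        patch = patch.astype(np.float64)
        gy, gx = np.gradient(patch)                       # per-pixel gradient
        mag = np.hypot(gx, gy)                            # gradient magnitude
        ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)       # orientation in [0, 2*pi)
        bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
        desc = np.zeros((len(ring_weights), n_bins))
        for k, w in enumerate(ring_weights):              # one "seed point" per ring
            mask = rings == k
            np.add.at(desc[k], bins[mask], w * mag[mask])
        return desc.reshape(-1)                           # 4 x 12 = 48 dimensions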
c) To prevent small shifts in feature point localization from causing abrupt changes in the descriptor, the descriptor is reordered by a cyclic left shift: the maximum value of the innermost ring is moved to the first position and the other concentric rings are rotated correspondingly, so the ordering does not change when the image is rotated by an arbitrary angle.
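One reading of this reordering step is sketched below: every ring's histogram is shifted left by the same amount, so that the peak bin of the innermost ring lands in the first position. Whether all rings share the innermost ring's offset or each uses its own peak is not fully specified, so the shared-offset variant is an assumption.

    import numpy as np

    def rotate_to_canonical(desc, n_bins=12):
        rings = desc.reshape(-1, n_bins)        # 4 rings x 12 direction bins
        shift = int(np.argmax(rings[0]))        # peak bin of the innermost ring
        rings = np.roll(rings, -shift, axis=1)  # cyclic left shift of every ring
        return rings.reshape(-1)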
d) Finally, the vector is normalized, which further reduces the influence of illumination changes. Let M' = (m'_1, m'_2, ..., m'_48) be the feature point descriptor; the normalization formula is:
m_i = m'_i / ||M'||,  i = 1, 2, ..., 48,  where ||M'|| = √((m'_1)² + (m'_2)² + ... + (m'_48)²)  (2)
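Formula (2), read as an ordinary L2 normalization, can be written as a one-line helper; the small eps term only guards against division by zero and is not part of the patent.

    import numpy as np

    def normalize(desc, eps=1e-12):
        # Divide by the Euclidean norm so that uniform illumination scaling cancels
        return desc / (np.linalg.norm(desc) + eps)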
e) When the similarity degree between two vectors is measured, a similarity function can be adopted, and the smaller the function value is, the larger the vector difference is, and the smaller the similarity is.
The cosine value of the vector included angle is used for measuring the similarity, namely cosine similarity measurement. The larger the cosine value is, the smaller the included angle between the two vectors is, and the higher the similarity between the vectors is; the cosine value between the two vectors can be deduced from the Euclidean dot product and magnitude formula as follows:
A·B=||A||||B||cosθ (3)。
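A minimal sketch of the matching step follows: descriptors are compared through the cosine of the angle between them, formula (3), and a pair is accepted when the best cosine value exceeds a threshold. The acceptance threshold (min_cos = 0.9) and the nearest-candidate strategy are assumptions; the patent only states that larger cosine values mean higher similarity.

    import numpy as np

    def match_descriptors(desc_a, desc_b, min_cos=0.9):
        # desc_a: n x 48 array, desc_b: m x 48 array of ring descriptors
        a = desc_a / (np.linalg.norm(desc_a, axis=1, keepdims=True) + 1e-12)
        b = desc_b / (np.linalg.norm(desc_b, axis=1, keepdims=True) + 1e-12)
        cos = a @ b.T                                    # all pairwise cosine values
        best = cos.argmax(axis=1)                        # best candidate in B for each A
        ok = cos[np.arange(len(a)), best] > min_cos      # keep only confident matches
        return [(i, int(j)) for i, j in enumerate(best) if ok[i]]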
the inventors used 2 sets of comparative experimental analyses, and the improved sift feature extraction algorithm proposed in this example is shown in fig. 3 (a-d). And performing a comparison experiment of feature matching and optimization on the experimental picture, and objectively evaluating four algorithms from three aspects of matching logarithm, accuracy and consumption time. The experimental results of the modified sift feature extraction algorithm are compared with the conventional sift algorithm, and in fig. 3, a and c are the matching results of the conventional sift algorithm, and b and d are the matching results of the modified sift feature extraction algorithm.
Table 1 shows the comparative experimental results of the improved SIFT feature extraction algorithm and the conventional SIFT algorithm.
Table 1: comparison of matching results of two algorithms
As can be seen from Table 1, the time efficiency of the improved SIFT feature extraction algorithm is higher than that of the traditional SIFT algorithm; the improved algorithm reduces the computational complexity through dimension reduction, greatly improves real-time performance, and can better complete fast matching of bronze inscription images. It is clearly superior to the traditional method in matching accuracy and time complexity, and is better suited to bronze inscription image recognition and matching. Compared with the original algorithm, the improved SIFT feature extraction algorithm achieves a good matching effect: although the number of matched feature point pairs decreases, the matching accuracy improves, and the high accuracy of the algorithm is preserved.

Claims (2)

1. A bronze image recognition method based on a sift algorithm is characterized by comprising the following steps:
firstly, acquiring bronze rubbing image data of a bronze ware by using an image segmentation method, and establishing a bronze data set;
step two, detecting the characteristic points of the bronze rubbing image sift by utilizing an improved sift characteristic extraction algorithm, and describing the characteristic points;
the steps of identification matching using the improved sift feature extraction algorithm are as follows:
a) Firstly, taking a characteristic point as a center, extracting a concentric ring area with the radius of 8 as a characteristic point neighborhood range, taking two pixels as units, sequentially decreasing the radius, dividing the characteristic point neighborhood into concentric circles with 4 units, and each grid represents a pixel point;
the center of the concentric rings is taken as the feature point, written M(p_1, p_2); the largest diameter is 16, and the circular region can be expressed as:
(x - p_1)² + (y - p_2)² = r²  (1)
after the image is rotated, only the pixel positions of the pixels in the same circular ring are changed, so that the pixels have good rotation invariance;
b) Calculating the module and direction of the gradient of each pixel, counting 12 gradient direction accumulated values in each ring by using a gradient histogram to form 1 seed point, wherein each seed point is rich in vector information of 12 directions, and generating 4 multiplied by 12=48 dimension feature vectors in total;
c) In order to avoid sudden change of the feature descriptors caused by small displacement of the feature point positioning, the feature descriptors are ordered in a cyclic left-shifting mode, the maximum value of the innermost circular ring is shifted to the position of the first pixel point to the left, and other concentric circles are sequentially rotated, so that the ordering value is not changed after the image is rotated at any angle;
d) Finally, the vector is normalized, so that the influence of illumination change can be further reduced; let M' = (m'_1, m'_2, ..., m'_48) be the feature point descriptor, and the normalization formula is:
m_i = m'_i / ||M'||,  i = 1, 2, ..., 48,  where ||M'|| = √((m'_1)² + (m'_2)² + ... + (m'_48)²)  (2)
e) When the similarity degree between two vectors is measured, a similarity function is adopted, the smaller the function value is, the larger the vector difference is, and the similarity is smaller;
the similarity is measured by using cosine values of vector included angles, and the larger the cosine values are, the smaller the included angle between two vectors is, and the higher the similarity between the vectors is; the cosine value between the two vectors is deduced by the Euclidean dot product and magnitude formula as follows:
A·B = ||A|| ||B|| cosθ  (3);
and thirdly, carrying out characteristic point matching on the characteristic points of the rubbing image obtained in the second step by using a matching method of cosine values of vector included angles, and obtaining an image matching result.
2. The method of claim 1, wherein the image segmentation method in step one is that automatic thresholding is performed on the bronze rubbing image, all rows and columns with black pixels larger than a certain set threshold in the row are searched for, and image segmentation is performed, so as to acquire bronze rubbing image data.
CN201911069702.4A 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm Active CN111091133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911069702.4A CN111091133B (en) 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911069702.4A CN111091133B (en) 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm

Publications (2)

Publication Number Publication Date
CN111091133A CN111091133A (en) 2020-05-01
CN111091133B true CN111091133B (en) 2023-05-30

Family

ID=70393081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911069702.4A Active CN111091133B (en) 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm

Country Status (1)

Country Link
CN (1) CN111091133B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183585A (en) * 2020-09-08 2021-01-05 西安建筑科技大学 Bronze ware inscription similarity measurement method based on multi-feature measurement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136751A (en) * 2013-02-05 2013-06-05 电子科技大学 Improved scale invariant feature transform (SIFT) image feature matching algorithm
WO2019042232A1 (en) * 2017-08-31 2019-03-07 西南交通大学 Fast and robust multimodal remote sensing image matching method and system
WO2019134327A1 (en) * 2018-01-03 2019-07-11 东北大学 Facial expression recognition feature extraction method employing edge detection and sift
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136751A (en) * 2013-02-05 2013-06-05 电子科技大学 Improved scale invariant feature transform (SIFT) image feature matching algorithm
WO2019042232A1 (en) * 2017-08-31 2019-03-07 西南交通大学 Fast and robust multimodal remote sensing image matching method and system
WO2019134327A1 (en) * 2018-01-03 2019-07-11 东北大学 Facial expression recognition feature extraction method employing edge detection and sift
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Improved SIFT descriptor algorithm based on cosine kernel function; Ding Lixiang et al.; Journal of Graphics (No. 03); full text *
Research on an improved SIFT algorithm based on the image autocorrelation matrix; Liu Guangxin; China New Telecommunications (No. 08); full text *

Also Published As

Publication number Publication date
CN111091133A (en) 2020-05-01

Similar Documents

Publication Publication Date Title
Saavedra et al. Sketch based Image Retrieval using Learned KeyShapes (LKS).
CN107145829B (en) Palm vein identification method integrating textural features and scale invariant features
CN106650580B (en) Goods shelf quick counting method based on image processing
CN107679512A (en) A kind of dynamic gesture identification method based on gesture key point
WO2023103372A1 (en) Recognition method in state of wearing mask on human face
Zagoris et al. Segmentation-based historical handwritten word spotting using document-specific local features
Jabid et al. Insulator detection and defect classification using rotation invariant local directional pattern
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
Liu et al. Finger vein recognition using optimal partitioning uniform rotation invariant LBP descriptor
CN111091133B (en) Bronze ware gold image recognition method based on sift algorithm
CN110246165B (en) Method and system for improving registration speed of visible light image and SAR image
Zhu et al. Scene text detection via extremal region based double threshold convolutional network classification
Hu et al. A new finger vein recognition method based on LBP and 2DPCA
CN106874942B (en) Regular expression semantic-based target model rapid construction method
Xiao et al. Trajectories-based motion neighborhood feature for human action recognition
Gui et al. A fast caption detection method for low quality video images
Mani et al. Design of a novel shape signature by farthest point angle for object recognition
CN108491888B (en) Environmental monitoring hyperspectral data spectrum section selection method based on morphological analysis
CN115359249B (en) Palm image ROI region extraction method and system
Park et al. Image retrieval technique using rearranged freeman chain code
Guruprasad et al. Multimodal recognition framework: an accurate and powerful Nandinagari handwritten character recognition model
Zhang et al. Sketch-based image retrieval using contour segments
Zhu et al. Rotation-robust math symbol recognition and retrieval using outer contours and image subsampling
Su et al. Skew detection for Chinese handwriting by horizontal stroke histogram
Shivakumara et al. New texture-spatial features for keyword spotting in video images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240229

Address after: 710000, Room D, 7th Floor, Building CD, Building 2, Xinqing Yayuan, 17A Yanta Road, Beilin District, Xi'an City, Shaanxi Province

Patentee after: Xi'an Qinghechuang Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: 710055 No. 13, Yanta Road, Shaanxi, Xi'an

Patentee before: XIAN UNIVERSITY OF ARCHITECTURE AND TECHNOLOGY

Country or region before: China