CN104217221A - Method for detecting calligraphy and paintings based on textural features - Google Patents
- Publication number: CN104217221A (application CN201410428074.5A)
- Authority
- CN
- China
- Prior art keywords: calligraphy, painting, image, feature, point
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a method for detecting calligraphy and painting works based on texture features, comprising the following steps: acquiring a digital image of the rice paper (xuan paper) of the authentic work; preprocessing the rice-paper digital image to obtain a preprocessed image; extracting texture features with the SURF algorithm; and matching the texture features of the work to be verified. If the features match, the work under test is authentic; if not, it is a forgery. The method is intended for verifying the authenticity of contemporary calligraphy and painting works: features are extracted from the authentic work, matched with a bidirectional FLANN algorithm, and the matched feature points are clustered to obtain the center of each cluster; a square region is cropped around each center, and the authenticity of the whole work is judged from the highest similarity among these square regions. The method can identify forgeries quickly and accurately, and makes it difficult for counterfeiters to forge such works.
Description
Technical Field
The invention relates to a method for detecting calligraphy and painting works, in particular to a method based on texture features, used to verify the authenticity of contemporary calligraphy and painting works.
Background
Calligraphy and painting have always occupied a prominent place among Chinese art works; according to available figures, they account for roughly 70-80% of the Chinese art auction market.
In recent years, with economic development and rising living standards, more and more people have taken up the art of calligraphy and painting. Works attributed to famous masters keep appearing on the market, yet little is known about how to distinguish genuine works from fakes.
Advances in printing technology have also given counterfeiters an opportunity for profit: with high-tech printing equipment they can clone famous works more easily and in larger quantities than by hand, and therefore with greater harm. Pan Baoqing, executive director of Sichuan Tongqing Cultural Appraisal and Evaluation Co., Ltd., once remarked that the Chinese art auction market is rather like a runaway wild horse: fakes and counterfeit works have flooded the market, and buying a genuine piece has become harder than ever.
According to available figures, forged Chinese calligraphy and paintings flood the art auction market, and the better known the artist, the higher the forgery rate. In some special sales held by small auction houses it is hard to find a single authentic work. The lag in authentication technology for valuable collectibles such as calligraphy and paintings is therefore one of the main obstacles restricting their circulation and trade in the cultural and creative market.
Existing anti-counterfeiting technologies mainly include infrared anti-counterfeiting, DNA anti-counterfeiting, and fingerprint identification.
Infrared anti-counterfeiting is one of the earlier techniques. Text, graphics, barcodes and the like, visible or invisible to the naked eye, are printed onto the work with special infrared toner, ink, or stamp ink, and can then be detected with a dedicated instrument during inspection.
DNA anti-counterfeiting draws on biology: a specific DNA fragment is transplanted onto the work. Since each item corresponds to a set of genes and every such set is unique, the work acquires a unique signature, which greatly simplifies verification.
With fingerprint identification, the artist, after completing a work, dips a finger in the ink pad and stamps a fingerprint impression below the seal, so that the work can later be authenticated by fingerprint recognition.
All of the above methods, however, rely on artificially adding material foreign to the work itself to assist the judgment, so the possibility remains that counterfeiters will use high technology to forge the work.
Those skilled in the art have therefore sought an anti-counterfeiting method and system that can identify forgeries quickly and accurately without leaving loopholes for counterfeiters, so as to ensure that the works sold at auction are genuine.
Summary of the Invention
The purpose of the present invention is to provide a method for detecting calligraphy and painting works based on texture features, for verifying the authenticity of contemporary works. After the artist completes a work, features are extracted from the authentic original. When a work is later submitted for verification, the features extracted from the original are matched with a bidirectional FLANN algorithm, the matched feature points are clustered to obtain the center of each cluster, a square region is cropped around each center, and the authenticity of the whole work is judged from the highest similarity among these square regions, so that forgeries can be identified quickly and accurately.
The invention provides a method for detecting calligraphy and painting works based on texture features, comprising the following steps:
(1) acquiring a rice-paper digital image of the authentic work;
(2) preprocessing the rice-paper digital image to obtain a preprocessed image;
(3) extracting texture features from the preprocessed image with the SURF algorithm;
(4) matching the texture features of the work to be verified: if they match, the work under test is the authentic original; if not, it is a forgery.
Further, in step (4) a bidirectional FLANN matching algorithm is used to match the texture features of the work to be verified.
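FLANN (as in OpenCV's FlannBasedMatcher) approximates nearest-neighbour search over descriptor sets; the "bidirectional" part keeps a pair only when the nearest-neighbour relation holds in both directions. A minimal pure-Python sketch of that cross-check idea, with brute-force search standing in for the FLANN index and made-up toy descriptors:

```python
def nearest(query, pool):
    """Index of the descriptor in `pool` closest to `query` (squared Euclidean)."""
    return min(range(len(pool)),
               key=lambda j: sum((q - p) ** 2 for q, p in zip(query, pool[j])))

def bidirectional_match(desc_a, desc_b):
    """Keep a pair (i, j) only when the nearest-neighbour relation holds both ways."""
    matches = []
    for i, d in enumerate(desc_a):
        j = nearest(d, desc_b)
        if nearest(desc_b[j], desc_a) == i:   # cross-check in the reverse direction
            matches.append((i, j))
    return matches

# toy 2-D "descriptors"; real SURF descriptors are 64-dimensional
a = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
b = [(5.1, 4.9), (0.2, 0.1), (20.0, 20.0)]
print(bidirectional_match(a, b))  # [(0, 1), (1, 0)] — a[2] has no mutual match
```

The one-directional match for a[2] is rejected because its nearest neighbour in b is already "claimed" by a closer point in the reverse direction, which is exactly the filtering the bidirectional scheme provides.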
Further, before the texture feature matching in step (4), the method also comprises the following steps:
(41) clustering with an improved K-Means algorithm to obtain the center of each cluster;
(42) using the cluster centers obtained in step (41) to extract the corresponding regions shared by the authentic work and the work to be verified.
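The patent does not spell out its "improved" K-Means, so the following is a sketch of the plain Lloyd's variant for clustering 2-D feature-point coordinates into k classes and returning the class centers; the point set is invented for illustration:

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means on 2-D points; returns the k class centers.
    (The patent's 'improved' variant is unspecified, so this is the textbook
    version with naive initialisation from the first k points.)"""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest current center
            j = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # recompute each center as the mean of its cluster
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

# two obvious groups of toy feature-point coordinates
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(kmeans(pts, 2))  # centers near (0.33, 0.33) and (10.33, 10.33)
```

Each returned center would then serve as the anchor of a square region to crop from both images for step (42).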
Further, the preprocessing of step (2) uses one or a combination of: color-to-grayscale conversion, histogram equalization, morphological transformation, edge detection, and Gabor filtering.
Further, extracting texture features from the preprocessed rice-paper image with the SURF algorithm in step (3) comprises the following steps:
(31) computing Haar features with an integral image;
(32) constructing the Hessian matrix to obtain extreme points;
(33) using Gaussian filtering to establish the criterion for whether a point (x, y) is a local extremum;
(34) extracting the feature points;
(35) determining the dominant orientation of each feature point;
(36) constructing the SURF feature-point descriptor.
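Step (31) rests on the integral image, which lets any box filter (and hence the Haar responses SURF uses) be evaluated with four table look-ups regardless of box size. A minimal sketch on a toy 2×2 image:

```python
def integral_image(img):
    """ii[y][x] holds the sum of img[0:y][0:x]; padded with a zero row/column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def box_sum(ii, top, left, height, width):
    """Sum over img[top:top+height][left:left+width] from four look-ups."""
    b, r = top + height, left + width
    return ii[b][r] - ii[top][r] - ii[b][left] + ii[top][left]

img = [[1, 2], [3, 4]]
ii = integral_image(img)
print(box_sum(ii, 0, 0, 2, 2))  # 10: the whole image
# a horizontal Haar response: right box minus left box
print(box_sum(ii, 0, 1, 2, 1) - box_sum(ii, 0, 0, 2, 1))  # 6 - 4 = 2
```

Because the cost is constant per box, the box approximations of the Gaussian second derivatives used in step (32) can be evaluated at every scale at the same speed.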
Further, determining the orientation in step (35) comprises the following steps:
(351) constructing a circular region centered on the feature point, with a radius of six times the scale at which the point was detected;
(352) scanning the circular region with a 60-degree sector rotated by a set interval;
(353) computing Haar wavelet responses at a size of four times the scale of the feature point, weighting each response by its distance from the center: the farther from the center, the smaller the weight;
(354) summing the Haar wavelet response vectors of all points inside the sector;
(355) finding the sector with the largest summed Haar response vector; the direction of that vector is the dominant orientation of the feature point.
Further, the feature-point extraction of step (34) comprises the following steps:
(341) comparing each local extremum candidate with its 8 neighbors at the same scale and the 18 points at the two adjacent scales above and below; if it is an extremum among all of them, it is kept as a preliminary feature point;
(342) after all preliminary feature points have been obtained, interpolating them to sub-pixel accuracy with three-dimensional linear interpolation, and then discarding points below a set threshold to obtain the final feature points.
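The 26-neighbour comparison of step (341) (8 neighbours in the same layer plus 9 in each adjacent scale layer) can be sketched as follows; the 3×3×3 response stack is invented toy data:

```python
def is_local_extremum(stack, s, y, x):
    """stack[s][y][x] is the filter response at scale index s.
    True if the point is strictly larger or strictly smaller than all
    26 neighbours: 8 in its own layer plus 9 in each adjacent layer."""
    v = stack[s][y][x]
    neighbours = [stack[s + ds][y + dy][x + dx]
                  for ds in (-1, 0, 1)
                  for dy in (-1, 0, 1)
                  for dx in (-1, 0, 1)
                  if (ds, dy, dx) != (0, 0, 0)]
    return all(v > n for n in neighbours) or all(v < n for n in neighbours)

flat = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
peak = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
print(is_local_extremum([flat, peak, flat], 1, 1, 1))  # True: 9 beats all 26
print(is_local_extremum([flat, flat, flat], 1, 1, 1))  # False: nothing stands out
```

In the full algorithm this test runs at every interior position of every middle scale layer, and the survivors are the preliminary feature points passed to step (342).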
Compared with the prior art, the detection method provided by the present invention has the following beneficial effects:
(1) After the artist completes a work, features are extracted from the authentic original. During verification, the features extracted from the original are matched with a bidirectional FLANN algorithm, the matched feature points are clustered to obtain the center of each cluster, a square region is cropped around each center, and the authenticity of the whole work is judged from the highest similarity among these regions, so that forgeries of contemporary works can be identified quickly and accurately;
(2) No material foreign to the work needs to be added to assist the judgment. Because the rice-paper texture of the authentic work is practically impossible to replicate, counterfeiters cannot forge the work even with high technology.
Brief Description of the Drawings
Fig. 1 is a flowchart of an embodiment of the present invention;
Fig. 2 is an optical-microscope image before histogram equalization;
Fig. 3 is an optical-microscope image after histogram equalization;
Fig. 4 is an electron-microscope image before histogram equalization;
Fig. 5 is an electron-microscope image after histogram equalization;
Fig. 6 is an image before the erosion operation;
Fig. 7 is the structuring element of the erosion operation;
Fig. 8 is the image after the erosion operation;
Fig. 9 is the optical-microscope image of Fig. 3 after thresholding;
Fig. 10 is the optical-microscope image of Fig. 9 after the opening operation;
Fig. 11 is the electron-microscope image of Fig. 5 after thresholding;
Fig. 12 is the electron-microscope image of Fig. 11 after the opening operation;
Fig. 13 is the optical-microscope image of Fig. 9 after Laplacian edge detection;
Fig. 14 is the electron-microscope image of Fig. 11 after Laplacian edge detection;
Fig. 15 is the optical-microscope image of Fig. 9 after Canny edge detection;
Fig. 16 is the electron-microscope image of Fig. 11 after Canny edge detection;
Fig. 17 is the optical-microscope image of Fig. 9 after Gabor filtering;
Fig. 18 is the electron-microscope image of Fig. 11 after Gabor filtering;
Fig. 19 shows the Haar feature categories;
Fig. 20 illustrates the integral image;
Fig. 21 is the y-direction template of the Gaussian filter;
Fig. 22 is the second-order mixed partial-derivative template of the Gaussian filter;
Fig. 23 is a scale pyramid;
Fig. 24 is a scale pyramid;
Fig. 25 shows the positions of the neighbors of a local extremum in the scale pyramid;
Fig. 26a is a schematic diagram of feature orientation;
Fig. 26b is a schematic diagram of feature orientation;
Fig. 26c is a schematic diagram of feature orientation;
Fig. 27 is a schematic diagram of descriptor orientation;
Fig. 28 shows the matching result after preprocessing for the group without rotation;
Fig. 29 shows the matching result after preprocessing for the group with rotation;
Fig. 30 shows the texture feature points clustered into four classes;
Fig. 31 shows the texture feature points clustered into four classes with the extracted regions.
Detailed Description
The technical solution of the present invention is further described below through specific embodiments in conjunction with the accompanying drawings, but the invention is not limited to the following embodiments.
As shown in Fig. 1, the detection method of one embodiment of the present invention comprises the following steps:
(1) acquiring a rice-paper digital image of the genuine work;
(2) preprocessing the rice-paper digital image to obtain a preprocessed image;
(3) extracting texture features from the preprocessed image with the SURF algorithm;
(4) matching the texture features of the work to be verified: if they match, the work under test is genuine; if not, it is a counterfeit.
The rice-paper digital image of the genuine work in step (1) can be acquired with an optical microscope or an electron microscope.
An optical microscope uses optical principles to magnify objects too small for the naked eye into a visible image, which is then converted into a digital image through a series of processing steps. Because rice paper is made from wood pulp, fibers, mineral fillers and the like, its optical image contains many impurities and has low contrast and poor depth, so the texture features are not distinct.
An electron microscope is an electron-optical device that produces a signal through the interaction of electrons with matter. Compared with an optical microscope, electromagnetic lenses replace optical lenses and the electron beam, invisible to the naked eye, is imaged onto a fluorescent screen. The resulting image is therefore a grayscale image with distinct texture features, good contrast, and large depth of field.
In this embodiment, an electron microscope is used to acquire the rice-paper digital image of the genuine work.
The preprocessing of step (2) aims to suppress unimportant information in the image and reinforce the genuinely useful information, thereby improving the detectability of the required information, simplifying the image data as much as possible, and making feature extraction easier.
Preprocessing can be performed in the spatial domain or the frequency domain.
Spatial-domain processing includes color-to-grayscale conversion, histogram equalization, morphological transformation, edge detection, and so on.
Frequency-domain processing includes the Gabor transform.
One or more of the above methods can be used to preprocess the rice-paper digital image.
In this embodiment, histogram equalization and the Gabor transform are used for preprocessing.
Color-to-grayscale conversion converts the color rice-paper image of the genuine work, acquired with an optical microscope, into a grayscale image.
The International Commission on Illumination (CIE) chose red (wavelength 700.0 nm), green (wavelength 546.1 nm), and blue (wavelength 435.8 nm) as the three basic primaries; any color can be represented as a combination of these three. This is the RGB color representation system.
Since the human eye is most sensitive to green, the green channel contains the most texture information; compared with the original image it also has higher contrast and less noise. The present invention therefore separates the green channel from the optical-microscope color image and uses its values as the gray values of the grayscale image.
Histogram equalization is a common technique in image processing, used to improve images whose background and foreground are both too bright or both too dark. It adjusts the overall contrast of the image using its histogram: the gray-level histogram is stretched from a few crowded gray-level intervals into a roughly uniform distribution over the whole gray range, so that each gray-level interval contains about the same number of pixels.
The probability of a pixel with gray level i appearing in the image is:
p(i) = n_i / n, 0 ≤ i ≤ L
where n_i is the number of occurrences of gray level i in the grayscale image, L is the highest gray level in the image, and n is the total number of pixels.
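These probabilities drive the standard equalization mapping; the text defines p(i) but leaves the remapping implicit, so the sketch below uses the textbook form T(i) = round((L) · Σ_{k≤i} p(k)) with L taken as levels − 1, on a toy 8-level image:

```python
def equalize(img, levels):
    """Remap gray levels through the scaled cumulative histogram:
    T(i) = round((levels - 1) * sum of p(k) for k <= i)."""
    pixels = [v for row in img for v in row]
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    table, acc = [], 0
    for count in hist:
        acc += count                               # running sum of n_k
        table.append(round((levels - 1) * acc / n))  # scaled CDF
    return [[table[v] for v in row] for row in img]

# toy image whose gray levels crowd into 2..4 out of a 0..7 range
print(equalize([[2, 2, 3], [3, 3, 4]], 8))  # [[2, 2, 6], [6, 6, 7]]
```

The crowded levels {2, 3, 4} are spread out to {2, 6, 7}, which is exactly the contrast stretch described above.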
Figs. 2 and 3 show an optical-microscope image before and after histogram equalization, and Figs. 4 and 5 an electron-microscope image before and after. In both cases, histogram equalization makes the texture features more distinct.
Mathematical morphology is a discipline founded on set theory that plays an important role in image processing and provides powerful tools for analyzing and describing geometric structure. It mainly studies the morphological characteristics of an image, analyzing its basic features and structure, that is, the relations between elements and between image regions. Morphological image processing generally uses neighborhood operations: a structuring element is applied at each pixel and a logical operation is performed over its neighborhood, and the results form the output image. Erosion and dilation are the most basic and common morphological operations, mainly used to reduce image noise.
Let F(x) be the structuring element. For each pixel x in the space of M, the erosion can be defined as:
X = {x : F(x) ⊆ M}
Fig. 6 shows the image M to be processed (generally a binary image; the operation acts on the black points), and Fig. 7 the structuring element F(x), whose anchor is the black point at its lower right. Erosion compares the anchor of F(x) with each point of M in turn: if all black points of F(x) fall within M, the point is kept; otherwise it is removed. Fig. 8 shows the result: it still lies within the original M but contains fewer points, as if M had been eroded by one layer.
Let F(x) be the structuring element. For each point x in the space of M, the dilation can be defined as:
X = {x : F(x) ∩ M ≠ ∅}
An opening consists of an erosion followed by a dilation. After choosing a suitable threshold, the image is binarized; applying the opening to the binary image then yields the result.
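The erosion/dilation/opening sequence above can be sketched for binary images as follows; for simplicity the structuring element is anchored at its top-left cell rather than the lower-right anchor used in Fig. 7, and the input image is invented:

```python
def erode(img, se):
    """Binary erosion: keep a pixel only if every cell of the structuring
    element, anchored at its top-left, lands on foreground."""
    h, w = len(img), len(img[0])
    offs = [(dy, dx) for dy, row in enumerate(se)
            for dx, v in enumerate(row) if v]
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in offs))
             for x in range(w)] for y in range(h)]

def dilate(img, se):
    """Binary dilation: turn a pixel on if the reflected element hits any foreground."""
    h, w = len(img), len(img[0])
    offs = [(dy, dx) for dy, row in enumerate(se)
            for dx, v in enumerate(row) if v]
    return [[int(any(0 <= y - dy < h and 0 <= x - dx < w and img[y - dy][x - dx]
                     for dy, dx in offs))
             for x in range(w)] for y in range(h)]

def opening(img, se):
    """Opening = erosion followed by dilation; removes specks smaller than `se`."""
    return dilate(erode(img, se), se)

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 1],   # isolated noise pixel
       [0, 0, 0, 0]]
se = [[1, 1],
      [1, 1]]
print(opening(img, se))  # the 2x2 block survives, the lone pixel is removed
```

Structures at least as large as the element survive the opening unchanged, while smaller specks (the noise the text mentions) are erased, which is why the operation is applied after thresholding.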
Fig. 9 is the optical-microscope image of Fig. 3 after thresholding; Fig. 10 is the image of Fig. 9 after the opening operation; Fig. 11 is the electron-microscope image of Fig. 5 after thresholding; Fig. 12 is the image of Fig. 11 after the opening operation.
Because the images themselves contain much noise, which strongly affects the opening operation, the results are not ideal.
边缘检测是图像处理的重要方法。由于边缘是图像某个部分颜色改变最快的区域,大多处在目标和目标、目标和背景以及区域和区域之间,同时是进行图像处理的重要基础。Edge detection is an important method of image processing. Since the edge is the area where the color of a certain part of the image changes the fastest, it is mostly between the target and the target, the target and the background, and the area and the area, and it is an important basis for image processing.
图像的边缘有的两个重要的属性特征是方向和幅度,边缘方向是垂直与边缘像素最为剧烈的方向,而边缘幅度则是灰度像素变换的程度。一般来说,边缘水平方向的像素变化平缓,而垂直于边缘方向的像素则变化比较剧烈。边缘上的这种变化可以用微分算子检测出来,一般用一阶或二阶导数来检测边缘。The two important attributes of the edge of the image are direction and magnitude. The edge direction is the most severe direction perpendicular to the edge pixels, and the edge magnitude is the degree of grayscale pixel transformation. Generally speaking, the pixels in the horizontal direction of the edge change gently, while the pixels in the direction perpendicular to the edge change sharply. This change on the edge can be detected using a differential operator, typically using first or second derivatives to detect edges.
Edge detection methods include Laplacian edge detection and Canny edge detection.
Laplacian edge detection uses the Laplacian of the two-dimensional function f(x,y), which is a second-order differential, defined as:
∇²f(x,y) = ∂²f/∂x² + ∂²f/∂y²
where:
∂²f/∂x² = f(x+1,y) + f(x-1,y) - 2f(x,y)
∂²f/∂y² = f(x,y+1) + f(x,y-1) - 2f(x,y)
Thus, for a 3*3 window, we obtain:
∇²f(x,y) = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)
The convolution operation is then performed with the operator given by this formula.
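The 3*3 convolution above can be sketched directly; in this minimal illustration the border pixels are simply left at zero:

```cpp
#include <cassert>
#include <vector>

using Img = std::vector<std::vector<int>>;

// Discrete Laplacian over a 3x3 window:
//   lap(x,y) = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y)
// Indexing is [row y][column x]; border pixels are left at 0 for simplicity.
Img laplacian(const Img& f) {
    int h = (int)f.size(), w = (int)f[0].size();
    Img out(h, std::vector<int>(w, 0));
    for (int y = 1; y + 1 < h; ++y)
        for (int x = 1; x + 1 < w; ++x)
            out[y][x] = f[y][x + 1] + f[y][x - 1] + f[y + 1][x] + f[y - 1][x]
                        - 4 * f[y][x];
    return out;
}
```

Uniform regions give a zero response, while isolated bright or dark points give a strong response, which is why the operator is sensitive to noise.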
Fig. 13 is the image obtained by applying Laplacian edge detection to the optical-microscope image shown in Fig. 9; Fig. 14 is the image obtained by applying Laplacian edge detection to the electron-microscope image shown in Fig. 11. The figures show that the result for the electron-microscope image is better than that for the optical-microscope image, but many noise points remain and the effect is still not ideal.
Canny edge detection includes the following steps:
a) Denoising
A two-dimensional Gaussian filter G(x,y) is used to smooth the image and reduce the influence of noise. It is equivalent to filtering successively with two one-dimensional Gaussian filters, one in the vertical direction G(y) and one in the horizontal direction G(x). The smoothed image is:
H(x,y) = G(x,y)*I(x,y) = G(y)*(G(x)*I(x,y))
where I(x,y) denotes the point (x,y) of the original image and H(x,y) the point (x,y) of the filtered image.
b) Finding the brightness gradient in the image
The vertical derivative G(y) and the horizontal derivative G(x) are computed with finite differences of the first-order partial derivatives; the Sobel or Prewitt operator is generally used. The directional derivatives are then used to compute the gradient magnitude.
c) Non-maximum suppression
A non-maximum suppression operation is applied to the gradient value of each pixel to localize each edge point precisely. For example, within the 3×3 neighborhood of the current pixel, if its gradient magnitude is greater than the gradient magnitudes of the two adjacent pixels along the gradient direction, the pixel is considered a candidate edge point and is marked 1; otherwise it is considered a non-edge point and is marked 0.
d) Thresholding
Double thresholding is applied to the image after non-maximum suppression to remove false edges and connect broken ones. A final high threshold is computed from an artificially given high threshold and the grayscale histogram of the image; a final low threshold is then computed from a given low threshold and the grayscale histogram in the same way. Each candidate edge point of the non-maximum-suppressed image is compared with the final high threshold, and the edge points below it are recorded. For all edge points, points exceeding the final low threshold are sought within the 8-neighborhood and marked as final edge points.
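The double-threshold step can be sketched as follows. This simplified single-pass version links a weak pixel only to an immediately adjacent strong pixel; a full implementation propagates connectivity transitively through chains of weak pixels:

```cpp
#include <cassert>
#include <vector>

using Grid = std::vector<std::vector<double>>;

// Hysteresis (double) thresholding of Canny, simplified to one pass: a pixel
// is an edge if its gradient magnitude reaches the high threshold, or reaches
// the low threshold while an 8-neighbour reaches the high threshold.
std::vector<std::vector<int>> hysteresis(const Grid& mag, double low, double high) {
    int h = (int)mag.size(), w = (int)mag[0].size();
    std::vector<std::vector<int>> edge(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (mag[y][x] >= high) { edge[y][x] = 1; continue; }
            if (mag[y][x] < low) continue; // suppressed outright
            for (int dy = -1; dy <= 1 && !edge[y][x]; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int ny = y + dy, nx = x + dx;
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w &&
                        mag[ny][nx] >= high) {
                        edge[y][x] = 1; // weak pixel linked to a strong one
                        break;
                    }
                }
        }
    return edge;
}
```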
Fig. 15 is the image obtained by applying Canny edge detection to the optical-microscope image shown in Fig. 9; Fig. 16 is the image obtained by applying Canny edge detection to the electron-microscope image shown in Fig. 11. The result for the electron-microscope image is again better than that for the optical-microscope image; the edges are continuous and clear, the texture features can be seen intuitively with the naked eye, and the processed image does not differ greatly from the original.
The Gabor transform used in frequency-domain processing is a windowed Fourier transform.
The Fourier transform decomposes a signal into different frequency components; any signal can be decomposed into a sum of complex sinusoids. The Gabor function can extract the required features at different scales and in different directions, and the frequency and orientation representations of the human visual system are very close to those of a Gabor filter, so Gabor filters usually perform well in texture processing and recognition.
The two-dimensional Gabor function has two forms of representation.
The first form is:
Original expression: g(x,y) = exp(-(x′² + γ²y′²)/(2σ²)) · exp(i(2πx′/λ + ψ))
Real part: exp(-(x′² + γ²y′²)/(2σ²)) · cos(2πx′/λ + ψ)
Imaginary part: exp(-(x′² + γ²y′²)/(2σ²)) · sin(2πx′/λ + ψ)
where:
x′ = x·cosθ + y·sinθ
y′ = -x·sinθ + y·cosθ
ψ denotes the phase offset, λ the wavelength of the sinusoidal factor, θ the orientation of the kernel function, σ the Gaussian standard deviation, and γ the aspect ratio of the x and y directions.
The second form is:
where:
the value of v determines the wavelength of the Gabor filter, the value of u the orientation of the Gabor kernel function, K the total number of orientations, and the parameter σ/k the size of the Gaussian window, which is taken here as
The second form is the one used in this work. The processed image is obtained by convolving the filter function with the image.
Since the Gabor function is strongly directional, Gabor transform result images are taken in the six directions 0°, 30°, 60°, 90°, 120° and 150°, as shown in Fig. 17 and Fig. 18. Fig. 17 is the image obtained by Gabor filtering the optical-microscope image shown in Fig. 9, and Fig. 18 is the image obtained by Gabor filtering the electron-microscope image shown in Fig. 11.
The Gabor filtering results are thresholded to obtain binarized images. To filter out some image impurities, the following selection conditions are defined when merging the image results of the six directions:
Let fi(x,y) be the gray value at the point (x,y) of the Gabor-transformed image whose kernel orientation is 30°·i; this value can only be 0 or 255. Let F(x,y) be the gray value of the merged image at the point (x,y). Then:
If the first condition on the fi holds, then F(x,y)=0, i.e. the point (x,y) is black.
If the second condition on the fi holds, then F(x,y)=255, i.e. the point (x,y) is white.
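The two conditions appear in the original only as formulas that are not reproduced here, so the merging rule can only be sketched under an assumption. The sketch below assumes a per-pixel voting rule over the six direction images, of which the strict "white in all six" case is one instance:

```cpp
#include <cassert>
#include <vector>

// Merge the six direction-wise binarised Gabor responses f_i(x,y), each in
// {0, 255}, into one pixel F(x,y). Assumed rule (the patent's exact
// conditions are not reproduced): the pixel is white only when at least
// `minVotes` of the six direction images are white.
int mergePixel(const std::vector<int>& fi, int minVotes) {
    int votes = 0;
    for (int v : fi)
        votes += (v == 255) ? 1 : 0;
    return votes >= minVotes ? 255 : 0;
}
```

With `minVotes = 6` the rule is a strict AND over the six directions; a smaller value keeps texture that is visible in only some orientations.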
Figs. 17 and 18 show that Gabor filtering works well for both the electron-microscope and the optical-microscope images, but it produces some false textures and noise points, especially for the optical-microscope images.
The implementation code of the Gabor filtering is as follows:
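The original listing is not reproduced above. As an illustration, a Gabor kernel can be generated from the first form of the function, whose rotation equations are given in the text; the standard Gaussian-envelope-times-cosine expression is assumed for the real part:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// First form of the 2-D Gabor function, real part (assumed standard shape):
//   g(x,y) = exp(-(x'^2 + gamma^2*y'^2) / (2*sigma^2)) * cos(2*pi*x'/lambda + psi)
// with the rotations from the text:
//   x' =  x*cos(theta) + y*sin(theta)
//   y' = -x*sin(theta) + y*cos(theta)
std::vector<std::vector<double>> gaborKernel(int half, double sigma, double theta,
                                             double lambda, double gamma, double psi) {
    const double PI = std::acos(-1.0);
    int n = 2 * half + 1;
    std::vector<std::vector<double>> k(n, std::vector<double>(n, 0.0));
    for (int y = -half; y <= half; ++y)
        for (int x = -half; x <= half; ++x) {
            double xp =  x * std::cos(theta) + y * std::sin(theta);
            double yp = -x * std::sin(theta) + y * std::cos(theta);
            k[y + half][x + half] =
                std::exp(-(xp * xp + gamma * gamma * yp * yp) / (2.0 * sigma * sigma)) *
                std::cos(2.0 * PI * xp / lambda + psi);
        }
    return k;
}
```

Filtering an image then amounts to convolving it with kernels generated for each of the six orientations θ = 30°·i.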
The earliest Haar features were proposed by Papageorgiou C. et al. ("A general framework for object detection"); later, Paul Viola and Michael Jones proposed a method of computing Haar features rapidly with the integral image ("Rapid object detection using a boosted cascade of simple features"). Rainer Lienhart and Jochen Maydt subsequently extended the Haar feature library with diagonal features ("An extended set of Haar-like features for rapid object detection").
As shown in Fig. 19, Haar features include edge features, linear features, center features and diagonal features, which are combined into feature templates. A feature template contains white and black rectangles, and the feature value of the template is defined as the sum of the pixels in the white rectangles minus the sum of the pixels in the black rectangles.
In this embodiment, the first two edge features are selected as the haarx and haary values for texture feature extraction.
Other Haar features or combinations of Haar features may also be selected for texture feature extraction.
Since a Haar feature is the sum of all pixel values in the white area of the rectangle minus the sum of all pixel values in the black area, and the number of features in an image is far greater than the number of its pixels, computing the pixel sum of every feature directly would require a very large amount of computation, much of it repeated.
Step (3) uses the SURF algorithm to extract texture features from the preprocessed rice-paper digital image, and includes the following steps:
a) Computing Haar features with the integral image
Paul Viola proposed a method of computing Haar features rapidly with the integral image method: an "integral image" (also called a Summed Area Table) is constructed first, after which any Haar rectangular feature can be obtained by table lookup (Look Up Table) and a small number of simple operations, greatly reducing the number of operations.
The integral image stores, for each pixel, the sum of all gray values within the rectangle spanned between that pixel's position and the origin, where the origin is the pixel in the upper-left corner of the image.
The integral image of an image I(x,y) (0≤x≤M, 0≤y≤N) can be expressed by the formula:
Pictorially, the black part is the current pixel and the gray part is the integration area, as shown in Fig. 20.
The sum of gray values S(x) of any rectangular region in the image can be obtained from the integral values at the 4 vertices of the rectangle alone:
S(x) = S(X1,Y1) + S(X2,Y2) - S(X2,Y1) - S(X1,Y2)
where S(X1,Y1), S(X2,Y2), S(X2,Y1) and S(X1,Y2) are the integral values at the 4 vertices of the rectangle.
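The table construction and the four-corner lookup can be sketched as follows; the zero-padded extra row and column are an implementation convenience, not part of the formula above:

```cpp
#include <cassert>
#include <vector>

using Img = std::vector<std::vector<int>>;

// Integral image: S[y][x] holds the sum of all pixels above and to the left
// of (x,y). One extra zero row and column keep the lookup branch-free.
Img integralImage(const Img& I) {
    int h = (int)I.size(), w = (int)I[0].size();
    Img S(h + 1, std::vector<int>(w + 1, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            S[y + 1][x + 1] = I[y][x] + S[y][x + 1] + S[y + 1][x] - S[y][x];
    return S;
}

// Sum over rows [y1,y2) and columns [x1,x2) from four table lookups,
// matching the four-corner formula in the text.
int rectSum(const Img& S, int x1, int y1, int x2, int y2) {
    return S[y2][x2] - S[y1][x2] - S[y2][x1] + S[y1][x1];
}
```

Any Haar feature value then costs a fixed handful of lookups regardless of the rectangle size, which is the speed-up the text describes.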
b) Constructing the Hessian matrix and obtaining the extreme points
The Hessian matrix is the most important part of the whole SURF algorithm. Assuming that the function f(x,y) is a twice continuously differentiable function of two variables, the Hessian matrix is:
H = [ ∂²f/∂x²  ∂²f/∂x∂y ; ∂²f/∂x∂y  ∂²f/∂y² ]
Every pixel then has a Hessian matrix, and the discriminant of the matrix is:
det H = (∂²f/∂x²)(∂²f/∂y²) - (∂²f/∂x∂y)²
If det H > 0, the function f(x,y) attains an extremum at the point (x,y).
c) Using Gaussian filtering to establish the criterion for whether a point (x,y) is a local extremum
Since the feature points must be scale-invariant, the image is Gaussian-filtered before the Hessian matrix is constructed, giving representations at different scales. The Gaussian-filtered image function is:
L((x,y),t) = G(t)·I(x,y)
L((x,y),t) is the representation of the image at different scales, obtained by convolving the Gaussian kernel G(t) with the image function I(x,y) at the point (x,y), where the Gaussian kernel G(t) is:
where g(x) denotes the Gaussian function and t the Gaussian variance. With this method the value of the Hessian determinant can be computed for each pixel of the image, and the resulting values are used to identify the feature points. Herbert Bay later proposed that approximate values can be used in place of L(x,t) for convenience of application. To reduce the error between the exact and approximate values, the Hessian matrix discriminant is expressed as:
det(Happrox) = DxxDyy - (0.9Dxy)²
where Dxx, Dyy and Dxy denote the box-filter approximations of the second derivatives.
Fig. 21 shows the template obtained after Gaussian filtering. The second-derivative template in the y-axis direction is computed, and an approximation is used to speed up the computation; the approximated template is shown on the right, which simplifies matters greatly. Moreover, the right-hand template can be evaluated with the integral image, which accelerates the computation further. Similarly, the second-order mixed partial-derivative template in the x and y directions is shown in Fig. 22.
This establishes the criterion for judging whether a point (x,y) is a local extremum. The convolution process over different scales can be represented by a pyramid model: each layer of the pyramid represents one scale of the image, and the convolution of the Gaussian kernel with the original image gives the representations at the different scales. For each layer, the SURF algorithm keeps the original image unchanged and only changes the size of the filter.
d) Extracting feature points
Figs. 23 and 24 show the scale pyramid. After the local extreme points of the image at different scales have been obtained, each point is compared, as shown in Fig. 25, with the 26 points in its three-dimensional neighborhood in the pyramid, i.e. the 8 adjacent points in the same scale layer and the 9 points in each of the two adjacent scale layers above and below. If the point is an extremum, it is retained as a preliminary feature point.
After all preliminary feature points have been obtained, sub-pixel feature points are obtained by three-dimensional linear interpolation, and points below a set threshold are removed to give the final feature points.
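The 26-neighbour comparison can be sketched as a predicate over three scale layers (the sub-pixel interpolation step is not shown):

```cpp
#include <cassert>
#include <vector>

using Layer = std::vector<std::vector<double>>;

// A candidate at (x,y) on the middle scale layer is kept only if its response
// is strictly larger than all 26 neighbours: 8 on its own layer plus 9 each
// on the scale layer above and the one below. Callers must keep (x,y) at
// least one pixel away from the layer borders.
bool isLocalMaximum(const Layer& below, const Layer& mid, const Layer& above,
                    int x, int y) {
    double v = mid[y][x];
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (below[y + dy][x + dx] >= v) return false;
            if (above[y + dy][x + dx] >= v) return false;
            if ((dy != 0 || dx != 0) && mid[y + dy][x + dx] >= v) return false;
        }
    return true;
}
```

A symmetric check (strictly smaller than all 26 neighbours) would detect local minima in the same way.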
e) Obtaining the feature direction
In SURF matching, the feature vector is obtained by computing the Haar wavelet features in the neighborhood of the feature point. If S is the scale at which the feature point lies, a circular region is generally established with the feature point as its center and 6S as its radius, and the Haar wavelet features of the points within it are computed.
In the SURF algorithm, as shown in Figs. 26a, 26b and 26c, the whole circular region is scanned with a 60-degree sector at a set interval. The Haar wavelet responses are computed with templates of side length 4S, and each response is weighted according to its distance from the center of the circle: the farther from the center, the smaller the weight. The Haar wavelet response vectors of all points within the sector are then summed, so that each sector yields one vector. Finally, the sector with the largest vector magnitude is found, and the direction of that sector's vector is taken as the feature direction of the feature point. This procedure is repeated to obtain the feature directions of all feature points.
f) Constructing the SURF feature point descriptor
A square region of size 20S*20S is taken around the feature point and oriented along the feature direction of that point; this region is divided into 16 sub-regions of 5S*5S each, as shown in Fig. 27, and the Haar wavelet features are then used to compute the descriptor entries of each sub-region.
The implementation code for computing the feature vectors is as follows:
SiftFeatureDetector detector; // the constructor uses internal defaults
std::vector<KeyPoint> keypoints_1, keypoints_2; // two vectors of points used to store the feature points
detector.detect(img_1, keypoints_1); // store the feature points detected in image img_1 in keypoints_1
detector.detect(img_2, keypoints_2); // likewise for img_2
// draw the feature points in the images
Mat img_keypoints_1, img_keypoints_2;
drawKeypoints(img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT); // draw the feature points in memory
drawKeypoints(img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
imshow("sift_keypoints_1", img_keypoints_1); // display the feature points
imshow("sift_keypoints_2", img_keypoints_2);
// compute the feature vectors
SiftDescriptorExtractor extractor; // define the descriptor extractor object
Mat descriptors_1, descriptors_2; // matrices to store the feature vectors
extractor.compute(img_1, keypoints_1, descriptors_1); // compute the feature vectors
extractor.compute(img_2, keypoints_2, descriptors_2);
Step (4) performs texture feature matching on the calligraphy and painting work to be detected.
There are many methods for feature point matching, such as K Nearest Neighbors (KNN), Brute Force (BF) and Fast Approximate Nearest Neighbors (FLANN). Since the FLANN algorithm is suited to high-dimensional data and is more efficient than the two algorithms mentioned above, the FLANN algorithm is chosen here.
However, FLANN matching is directional between the target image and the candidate image: matching the target image against the candidate image and matching the candidate image against the target image do not give the same result. Bidirectional FLANN matching is therefore used to reduce this error.
Let a1 be a feature point in image M1. The initial matching pair (a1,a2) with the minimum distance in image M2 is found with the FLANN algorithm, and a threshold DIST = h*distmin is then set according to the minimum distance distmin over all matching pairs. If the minimum distance of the matching point is less than DIST, a2 in image M2 is taken as the candidate matching point of a1 of image M1; otherwise a1 is discarded and the next point of image M1 is matched. In this way the matching point pairs of image M1 with respect to M2 are obtained.
Similarly, the matching point pairs of image M2 with respect to M1 are obtained by the same method. Let (a1,a2) be a matching point pair of image M1 with respect to M2, and let (a3,a2) be the matching point pair of point a2 of image M2 with respect to M1. If a3 = a1 the match is considered successful; otherwise it is discarded. This gives the final matching points. Figs. 28 and 29 show the feature point matching results. The straight lines in Fig. 28 connect pairs of matching points; because the first group of rice-paper images involves only translation and no rotation, all the matching lines appear as parallel straight lines. The second group involves rotation, so the matching lines on the left and right of Fig. 29 are not parallel.
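Once each direction's nearest neighbours are known, the cross-check at the heart of the bidirectional matching reduces to the following filter (the FLANN search itself is not shown):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Cross-check of the bidirectional matching: a pair (a1, a2) is accepted only
// if a1's best match in M2 is a2 AND a2's best match back in M1 is a1 again.
// fwd[i] / bwd[j] hold nearest-neighbour indices (-1 means no match survived
// the distance threshold).
std::vector<std::pair<int,int>> crossCheck(const std::vector<int>& fwd,
                                           const std::vector<int>& bwd) {
    std::vector<std::pair<int,int>> kept;
    for (int i = 0; i < (int)fwd.size(); ++i) {
        int j = fwd[i];
        if (j >= 0 && j < (int)bwd.size() && bwd[j] == i)
            kept.push_back({i, j});
    }
    return kept;
}
```

Only mutually-best pairs survive, which removes most of the one-sided mismatches the text describes.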
Although bidirectional FLANN matching greatly improves the accuracy of feature point matching, it cannot be ensured that every feature point is matched correctly; moreover, the target image itself may contain meaningless feature points, i.e. noise points, which are not a valid description of the target of interest.
For these reasons, feature point matching alone cannot be expected to provide an accurate output for calligraphy and painting image matching. Feature point clustering is therefore used here to extract common regions and solve this problem.
Among clustering algorithms, K-Means is a highly representative and the most commonly used algorithm. It divides n objects into k clusters so that the sum of the similarities of the objects within each cluster is highest, where similarity is represented by the mean distance from the points in a cluster to its center. The algorithm starts by selecting K objects at random and treating each as the center of a cluster; every object is assigned to the cluster whose center is nearest, the mean and center of each cluster are then updated, and these steps are repeated until the sum of the mean-square errors within the clusters is minimized.
However, the original K-Means algorithm achieves a good clustering effect only when the clusters differ considerably from one another, and it is sensitive to outliers and easily disturbed. This embodiment therefore adopts the weighted K-means method (K-WMeans). The whole computation proceeds as follows:
① Select any K points as the initial cluster centers. For a cluster Sj with center xi, define d(xi,xk) as the distance from xi to any other point xk; the weight of a point can then be defined as:
where
② For the points in a cluster, compute their weighted mean and reassign each point to the cluster with the closest weight. For a cluster Sj, the weight is computed by the formula:
③ Compute the weights of the new cluster centers according to the above formula.
④ Repeat steps ② and ③ until the samples of every cluster are stable and the sum of the mean-square errors within the clusters is minimized.
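Because the weight formulas appear in the original only as images, only the underlying Lloyd iteration of plain K-Means is sketched here; the K-WMeans variant would additionally weight each point in the assignment and center-update steps:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// One iteration of plain K-Means: assign each point to its nearest centre,
// then move each centre to the mean of its members. Repeating this until the
// labels stop changing minimises the within-cluster squared error.
void kmeansStep(const std::vector<Pt>& pts, std::vector<Pt>& centres,
                std::vector<int>& label) {
    label.assign(pts.size(), 0);
    for (size_t i = 0; i < pts.size(); ++i) {
        double best = 1e300;
        for (size_t k = 0; k < centres.size(); ++k) {
            double dx = pts[i].x - centres[k].x, dy = pts[i].y - centres[k].y;
            double d = dx * dx + dy * dy;
            if (d < best) { best = d; label[i] = (int)k; }
        }
    }
    for (size_t k = 0; k < centres.size(); ++k) {
        double sx = 0, sy = 0; int n = 0;
        for (size_t i = 0; i < pts.size(); ++i)
            if (label[i] == (int)k) { sx += pts[i].x; sy += pts[i].y; ++n; }
        if (n) centres[k] = {sx / n, sy / n};
    }
}
```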
In the texture-feature-based detection method for calligraphy and painting works of this embodiment, the feature points are clustered into four classes; the effect after clustering is shown in Fig. 30.
After clustering, the center point of each cluster is extracted, and a square region of fixed size centered on that point is extracted. The region taken from the target image is smaller than that taken from the candidate image, because feature point matching errors cause an offset error in the cluster centers. The effect of clustering into four classes and cutting out partial regions around the class centers is shown in Fig. 31.
At this point, the target regions extracted from the candidate image are combined with the cluster centers to perform an affine transformation, correcting the rotation and translation errors. The candidate image is then dilated, which also reduces the error. Let the target region image be of size M*M and the candidate region image of size N*N with N>M. In the candidate region, a square region Hi,j of the same size as the target region is cut out centered on the point (i,j), where 0≤i≤N-M and 0≤j≤N-M. The similarity S(i,j) between Hi,j and the target region is then:
where LM(n,m) and Hi,j(n,m) denote the gray values at the point (n,m) of the target region and of the cut-out region respectively. The similarity rate S of the regions is defined as:
Finally, the highest similarity among all regions is taken as the similarity of the two images.
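The formulas for S(i,j) and the similarity rate appear in the original only as images. The sketch below therefore assumes that the similarity of a window is the fraction of pixels at which the two binarized regions agree, which is consistent with matching rates such as 1 and 0.995885 reported below:

```cpp
#include <cassert>
#include <vector>

using Img = std::vector<std::vector<int>>;

// Slide the M*M target block over every offset of the N*N candidate region
// and return the best window score, where a window's score is assumed to be
// the fraction of pixels at which the two blocks agree.
double bestSimilarity(const Img& target, const Img& cand) {
    int M = (int)target.size(), N = (int)cand.size();
    double best = 0.0;
    for (int i = 0; i + M <= N; ++i)
        for (int j = 0; j + M <= N; ++j) {
            int same = 0;
            for (int n = 0; n < M; ++n)
                for (int m = 0; m < M; ++m)
                    same += (target[n][m] == cand[i + n][j + m]) ? 1 : 0;
            double s = (double)same / (M * M);
            if (s > best) best = s;
        }
    return best;
}
```

The brute-force scan is quadratic in the number of offsets; the regions here are small fixed-size crops around cluster centers, so this cost is acceptable.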
The similarities of the first group of calligraphy and painting works are:
matching rate 1
matching rate 0.995885
matching rate 0.970803
matching rate 1
The similarities of the second group of calligraphy and painting works are:
matching rate 0.728111
matching rate 0.928571
matching rate 0.871508
matching rate 0.769504
The first group of figures gives the similarity of the region extracted around each class center in the first group of images. The maximum matching rate is 1, so the two calligraphy-and-painting images can be considered identical, and the work is therefore authentic. The second group of figures gives the similarity of the region extracted around each class center in the second group of images. Owing to the rotation and to errors introduced during preprocessing, not every region achieves a particularly high matching rate, but one region still reaches a matching rate of 92.8%, so this work can also be considered authentic.
The implementation code for region matching is as follows:
IplImage* pRightImage11 = cvCloneImage(my_rio1);
cvSetImageROI(pRightImage11, Rect(maxx, maxy, rect_wide, rect_wide));
cvCopy(pRightImage11, matched, 0);
IplImage* matchmodel = andoperate(matched, my_rio);
cout << "matching rate" << maxvalue << endl;
In the texture-feature-based detection method for calligraphy and painting works provided by the present invention: (1) feature extraction needs to be performed only once on the authentic work; during detection of a work to be examined, the features extracted from the authentic work are used, the bidirectional FLANN algorithm is combined with feature point clustering to obtain the center of each class, square regions are cut out around the centers, and the authenticity of the whole work is judged from the highest similarity among the square regions, so that the authenticity of calligraphy and painting works can be distinguished quickly and accurately; (2) no materials other than the calligraphy and painting itself need to be added to assist in judging authenticity, and since the rice-paper texture features of an authentic work are almost impossible to reproduce, criminals are prevented from using high technology to forge calligraphy and painting works.
The preferred specific embodiments of the present invention have been described in detail above. It should be understood that a person of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain, on the basis of the prior art and in accordance with the concept of the present invention, through logical analysis, reasoning or limited experiment shall fall within the scope of protection determined by the claims.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410428074.5A CN104217221A (en) | 2014-08-27 | 2014-08-27 | Method for detecting calligraphy and paintings based on textural features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104217221A true CN104217221A (en) | 2014-12-17 |
Family
ID=52098684
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410428074.5A Pending CN104217221A (en) | 2014-08-27 | 2014-08-27 | Method for detecting calligraphy and paintings based on textural features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104217221A (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104636733A (en) * | 2015-02-12 | 2015-05-20 | 湖北华中文化产权交易所有限公司 | Image characteristic-based painting and calligraphy work authenticating method |
CN105243384A (en) * | 2015-09-17 | 2016-01-13 | 上海大学 | Pattern recognition-based cultural relic and artwork uniqueness identification method |
CN105389557A (en) * | 2015-11-10 | 2016-03-09 | 佛山科学技术学院 | Electronic official document classification method based on multi-region features |
CN105426884A (en) * | 2015-11-10 | 2016-03-23 | 佛山科学技术学院 | Fast document type recognition method based on full-sized feature extraction |
CN106340011A (en) * | 2016-08-23 | 2017-01-18 | 天津光电高斯通信工程技术股份有限公司 | Automatic detection and identification method for railway wagon door opening |
CN106991419A (en) * | 2017-03-13 | 2017-07-28 | 特维轮网络科技(杭州)有限公司 | Method for anti-counterfeit based on tire inner wall random grain |
CN107004263A (en) * | 2014-12-31 | 2017-08-01 | 朴相来 | Image analysis method, device and computer readable device |
CN107563427A (en) * | 2016-08-25 | 2018-01-09 | 维纳·肖尔岑 | Method for copyright identification of oil paintings and corresponding use |
CN108734176A (en) * | 2018-05-07 | 2018-11-02 | 南京信息工程大学 | Certificate true-false detection method based on texture |
CN108846681A (en) * | 2018-05-30 | 2018-11-20 | 于东升 | For the method for anti-counterfeit and device of woodwork, anti-fake traceability system |
CN109271839A (en) * | 2018-07-23 | 2019-01-25 | 广东数相智能科技有限公司 | A kind of books defect detection method, system and storage medium |
CN109543757A (en) * | 2018-11-27 | 2019-03-29 | 陕西文投艺术品光谱科技有限公司 | A kind of painting and calligraphy painting style identification method based on spectral imaging technology and atlas analysis |
CN110111387A (en) * | 2019-04-19 | 2019-08-09 | 南京大学 | A kind of pointer gauge positioning and reading algorithm based on dial plate feature |
CN110347855A (en) * | 2019-07-17 | 2019-10-18 | 京东方科技集团股份有限公司 | Paintings recommended method, terminal device, server, computer equipment and medium |
CN110599665A (en) * | 2018-06-13 | 2019-12-20 | 深圳兆日科技股份有限公司 | Paper pattern recognition method and device, computer equipment and storage medium |
CN111002348A (en) * | 2019-12-25 | 2020-04-14 | 深圳前海达闼云端智能科技有限公司 | Robot performance testing method, robot and computer readable storage medium |
CN111242993A (en) * | 2020-01-08 | 2020-06-05 | 暨南大学 | Method for identifying authenticity of article based on substrate texture image and appearance characteristic image |
CN111709363A (en) * | 2020-06-16 | 2020-09-25 | 湘潭大学 | Identification method of authenticity of traditional Chinese painting based on feature recognition of rice paper |
CN112818730A (en) * | 2020-03-05 | 2021-05-18 | 刘惠敏 | Cloud storage type online signature identification system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130080426A1 (en) * | 2011-09-26 | 2013-03-28 | Xue-wen Chen | System and methods of integrating visual features and textual features for image searching |
CN103106265A (en) * | 2013-01-30 | 2013-05-15 | 北京工商大学 | Method and system of classifying similar images |
CN103440668A (en) * | 2013-08-30 | 2013-12-11 | 中国科学院信息工程研究所 | Method and device for tracing online video target |
CN103714349A (en) * | 2014-01-09 | 2014-04-09 | 成都淞幸科技有限责任公司 | Image recognition method based on color and texture features |
- 2014-08-27 CN CN201410428074.5A patent/CN104217221A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130080426A1 (en) * | 2011-09-26 | 2013-03-28 | Xue-wen Chen | System and methods of integrating visual features and textual features for image searching |
CN103106265A (en) * | 2013-01-30 | 2013-05-15 | 北京工商大学 | Method and system of classifying similar images |
CN103440668A (en) * | 2013-08-30 | 2013-12-11 | 中国科学院信息工程研究所 | Method and device for tracing online video target |
CN103714349A (en) * | 2014-01-09 | 2014-04-09 | 成都淞幸科技有限责任公司 | Image recognition method based on color and texture features |
Non-Patent Citations (1)
Title |
---|
Zhang Wanquan: "Research on an Image Matching Algorithm Based on Regional SURF", China Masters' Theses Full-text Database *
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107004263A (en) * | 2014-12-31 | 2017-08-01 | 朴相来 | Image analysis method, device and computer readable device |
CN107004263B (en) * | 2014-12-31 | 2021-04-09 | 朴相来 | Image analysis method, device and computer readable device |
CN104636733A (en) * | 2015-02-12 | 2015-05-20 | 湖北华中文化产权交易所有限公司 | Image characteristic-based painting and calligraphy work authenticating method |
CN105243384A (en) * | 2015-09-17 | 2016-01-13 | 上海大学 | Pattern recognition-based cultural relic and artwork uniqueness identification method |
CN105389557A (en) * | 2015-11-10 | 2016-03-09 | 佛山科学技术学院 | Electronic official document classification method based on multi-region features |
CN105426884A (en) * | 2015-11-10 | 2016-03-23 | 佛山科学技术学院 | Fast document type recognition method based on full-sized feature extraction |
CN106340011A (en) * | 2016-08-23 | 2017-01-18 | 天津光电高斯通信工程技术股份有限公司 | Automatic detection and identification method for railway wagon door opening |
CN106340011B (en) * | 2016-08-23 | 2019-01-18 | 天津光电高斯通信工程技术股份有限公司 | A kind of automatic detection recognition method that lorry door is opened |
CN107563427A (en) * | 2016-08-25 | 2018-01-09 | 维纳·肖尔岑 | Method for copyright identification of oil paintings and corresponding use |
CN106991419A (en) * | 2017-03-13 | 2017-07-28 | 特维轮网络科技(杭州)有限公司 | Method for anti-counterfeit based on tire inner wall random grain |
CN108734176A (en) * | 2018-05-07 | 2018-11-02 | 南京信息工程大学 | Certificate true-false detection method based on texture |
CN108734176B (en) * | 2018-05-07 | 2021-11-12 | 南京信息工程大学 | Certificate authenticity detection method based on texture |
CN108846681A (en) * | 2018-05-30 | 2018-11-20 | 于东升 | For the method for anti-counterfeit and device of woodwork, anti-fake traceability system |
CN110599665A (en) * | 2018-06-13 | 2019-12-20 | 深圳兆日科技股份有限公司 | Paper pattern recognition method and device, computer equipment and storage medium |
CN109271839A (en) * | 2018-07-23 | 2019-01-25 | 广东数相智能科技有限公司 | A kind of books defect detection method, system and storage medium |
CN109271839B (en) * | 2018-07-23 | 2022-11-01 | 广东数相智能科技有限公司 | Book defect detection method, system and storage medium |
CN109543757A (en) * | 2018-11-27 | 2019-03-29 | 陕西文投艺术品光谱科技有限公司 | A kind of painting and calligraphy painting style identification method based on spectral imaging technology and atlas analysis |
CN110111387B (en) * | 2019-04-19 | 2021-07-27 | 南京大学 | A kind of pointer watch positioning and reading method based on dial features |
CN110111387A (en) * | 2019-04-19 | 2019-08-09 | 南京大学 | A kind of pointer gauge positioning and reading algorithm based on dial plate feature |
CN110347855A (en) * | 2019-07-17 | 2019-10-18 | 京东方科技集团股份有限公司 | Paintings recommended method, terminal device, server, computer equipment and medium |
US11341735B2 (en) | 2019-07-17 | 2022-05-24 | Boe Technology Group Co., Ltd. | Image recommendation method, client, server, computer system and medium |
CN111002348A (en) * | 2019-12-25 | 2020-04-14 | 深圳前海达闼云端智能科技有限公司 | Robot performance testing method, robot and computer readable storage medium |
CN111242993A (en) * | 2020-01-08 | 2020-06-05 | 暨南大学 | Method for identifying authenticity of article based on substrate texture image and appearance characteristic image |
CN111242993B (en) * | 2020-01-08 | 2022-04-26 | 暨南大学 | Authenticity Identification Method of Items Based on Substrate Texture Image and Appearance Feature Image |
CN112818730A (en) * | 2020-03-05 | 2021-05-18 | 刘惠敏 | Cloud storage type online signature identification system |
CN111709363A (en) * | 2020-06-16 | 2020-09-25 | 湘潭大学 | Identification method of authenticity of traditional Chinese painting based on feature recognition of rice paper |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104217221A (en) | Method for detecting calligraphy and paintings based on textural features | |
CN110717896B (en) | Plate strip steel surface defect detection method based on significance tag information propagation model | |
CN104751142B (en) | A kind of natural scene Method for text detection based on stroke feature | |
CN109919960B (en) | Image continuous edge detection method based on multi-scale Gabor filter | |
Nurhaida et al. | Performance comparison analysis features extraction methods for batik recognition | |
CN107563377A (en) | It is a kind of to detect localization method using the certificate key area of edge and character area | |
CN110119741A (en) | A kind of card card image information recognition methods having powerful connections | |
CN109426814A (en) | A kind of positioning of the specific plate of invoice picture, recognition methods, system, equipment | |
JP2014153820A (en) | Character segmentation device and character segmentation method | |
CN107862267A (en) | Face recognition features' extraction algorithm based on full symmetric local weber description | |
CN110298376A (en) | A kind of bank money image classification method based on improvement B-CNN | |
CN105809173B (en) | A kind of image RSTN invariable attribute feature extraction and recognition methods based on bionical object visual transform | |
CN103955950B (en) | Image tracking method utilizing key point feature matching | |
Akhtar et al. | Optical character recognition (OCR) using partial least square (PLS) based feature reduction: an application to artificial intelligence for biometric identification | |
Rizvi et al. | Optical character recognition based intelligent database management system for examination process control | |
CN113011426A (en) | Method and device for identifying certificate | |
Ran et al. | Sketch-guided spatial adaptive normalization and high-level feature constraints based GAN image synthesis for steel strip defect detection data augmentation | |
Bo et al. | Fingerprint singular point detection algorithm by poincaré index | |
JP5201184B2 (en) | Image processing apparatus and program | |
CN104268550A (en) | Feature extraction method and device | |
CN115100719A (en) | Face recognition method based on fusion of Gabor binary pattern and three-dimensional gradient histogram features | |
Chakraborty et al. | Review of various image processing techniques for currency note authentication | |
CN106446920A (en) | Stroke width transformation method based on gradient amplitude constraint | |
CN112070684B (en) | Method for repairing characters of a bone inscription based on morphological prior features | |
Narasimhamurthy et al. | A Copy-Move Image Forgery Detection Using Modified SURF Features and AKAZE Detector. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20141217 |
|