CN104834890A - Method for extracting expression information of characters in calligraphy work - Google Patents

Method for extracting expression information of characters in calligraphy work

Info

Publication number
CN104834890A
Authority
CN
China
Prior art keywords
image
processed
calligraphic
template
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510080291.4A
Other languages
Chinese (zh)
Other versions
CN104834890B (en)
Inventor
郑霞
许鹏飞
于忠华
李守江
肖云
张远
郭军
章勇勤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NORTHWEST UNIVERSITY
Zhejiang University ZJU
Original Assignee
NORTHWEST UNIVERSITY
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NORTHWEST UNIVERSITY, Zhejiang University ZJU
Priority to CN201510080291.4A
Publication of CN104834890A
Application granted
Publication of CN104834890B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a method for extracting the expression (shen cai) information of characters in calligraphic works, belonging to the field of image processing. The method converts the color space of a calligraphic work to be processed, classifies the converted work according to a first-channel threshold, and then processes the work differently depending on whether it is a stele image or a tie (model-book) image, so as to obtain an image containing the form information of the characters in a stele image, or an image containing the form and expression information of both the characters and the seal in a tie image. This avoids the incompleteness of character information that results from relying solely on edge detection in the prior art, enriches the detail of the detected character information, and makes the study of calligraphic works more convenient.

Description

A method for extracting the expression information of characters in calligraphic works

Technical Field

The present invention relates to the field of image processing, and in particular to a method for extracting the expression information of characters in calligraphic works.

Background Art

Chinese culture has a long history, and within it calligraphy can be regarded as a shining pearl. From ancient times to the present, countless famous calligraphers have produced innumerable artistic treasures and left us a vast cultural heritage. Identifying and appraising calligraphic works requires a deep understanding of calligraphy, so such identification has had to be performed manually.

With the advance of technology, computer-based image recognition has matured. After an inscription is converted into an image, the edges of the characters are processed, the processed image is projected, and the characters in the calligraphic work are located according to the projection, so that the characters can be extracted.

In the course of realizing the present invention, the inventors found that the prior art has at least the following problems:

In character extraction according to the prior art, detection relies mainly on the edge information of the image, that is, on form information such as the stroke shapes of the characters, without considering expression information such as color and seals. As a result, the detected character information is incomplete, which is inconvenient for subsequent study of calligraphic works.

Summary of the Invention

To solve the problems of the prior art, the present invention provides a method for extracting the expression information of characters in calligraphic works, the method comprising:

acquiring a calligraphic work to be processed, and converting the color space of the calligraphic work to be processed into a preset color space to obtain a converted calligraphic work to be processed;

extracting a first-channel value of the converted calligraphic work to be processed, and distinguishing the type of the converted calligraphic work to be processed according to a first threshold;

if the converted calligraphic work to be processed is a stele image, processing it with a guided filter to obtain an image containing the form information of the characters of the stele image;

if the converted calligraphic work to be processed is a tie (model-book) image, processing it with the guided filter to obtain an image containing the form and expression information of the characters and of the seal in the tie image.

Optionally, the extracting of the first-channel value of the converted calligraphic work to be processed and the distinguishing of the type of the calligraphic work to be processed according to the first threshold include:

processing the converted calligraphic work to be processed and determining the first-channel value of the converted calligraphic work to be processed;

determining, according to the first-channel value, a first threshold for distinguishing the foreground from the background of the converted calligraphic work to be processed;

if the first-channel value is greater than or equal to the first threshold, the converted calligraphic work to be processed is a tie image;

if the first-channel value is smaller than the first threshold, the converted calligraphic work to be processed is a stele image.

Optionally, when the converted calligraphic work to be processed is a stele image, the processing of the stele image with the guided filter to obtain an image containing the form information of the characters of the stele image includes:

binarizing the converted calligraphic work to be processed according to a second-channel value to obtain a binarized first template;

denoising the converted calligraphic work to be processed with the guided filter to obtain a denoised smooth template;

extracting, according to the first template in combination with the denoised smooth template, the form information of the characters of the stele image through the guided filter, to obtain the image containing the form information of the characters of the stele image.

Optionally, the denoising of the converted calligraphic work to be processed with the guided filter to obtain the denoised smooth template specifically includes:

denoising the converted calligraphic work to be processed according to a first formula to obtain the denoised smooth template, the first formula being

$$I_O(x,y) = a_k I_g(x,y) + b_k, \quad \forall (x,y) \in \omega_k,$$

where I_O(x,y) is the pixel value at coordinate (x,y) of the filtered image, a_k and b_k are linear coefficients, I_g(x,y) is the pixel value at coordinate (x,y) of the guide image, and ω_k is a local window of radius r centered at pixel (x,y).

Optionally, the method further includes:

performing secondary denoising on the smooth template using a second formula to obtain a secondarily denoised smooth template, the second formula being

$$\sigma_n = \sqrt{\frac{\pi}{2}} \cdot \frac{1}{6(W-2)(H-2)} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| I(x,y) \ast N \right|,$$

where σ_n is the computed noise variance, W and H are the width and height of the image I, respectively, and N is a mask operator.

Optionally, when the converted calligraphic work to be processed is a tie image, the processing of the converted calligraphic work to be processed through the guided filter to obtain an image containing the form and expression information of the characters and of the seal in the tie image includes:

binarizing the converted calligraphic work to be processed according to the second-channel value to obtain a binarized second template;

obtaining a binarized third template of the converted calligraphic work to be processed according to the inverted values of the third channel of the converted calligraphic work to be processed in combination with a preset second threshold;

combining the second template and the third template to obtain a combined template, and extracting, in combination with the converted image to be processed, the form and expression information of the characters and of the seal in the tie image through the guided filter, to obtain the image containing the form and expression information of the characters and of the seal in the tie image.

Optionally, the combining of the second template and the third template to obtain the combined template includes:

obtaining the combined template according to a third formula, the third formula being

where CS(x,y) is the pixel value at coordinate (x,y) of the resulting combined template, C(x,y) is the pixel value at coordinate (x,y) of the second template (the characters of the tie image), and S(x,y) is the pixel value at coordinate (x,y) of the third template (the seal of the tie image).

The technical solution provided by the present invention has the following beneficial effects:

By converting the color space of the calligraphic work to be processed, distinguishing the type of the converted work according to the first-channel threshold, and processing the work differently depending on whether it is a stele image or a tie image, an image containing the form information of the characters of a stele image, or an image containing the form and expression information of the characters and the seal of a tie image, is obtained. This avoids the incompleteness of character information that results from relying solely on edge detection in the prior art, enriches the detail of the detected character information, and makes the study of calligraphic works more convenient.

Brief Description of the Drawings

To illustrate the technical solution of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of a method for extracting the expression information of characters in calligraphic works provided by the present invention;

Fig. 2 is a schematic flowchart of part of the method for extracting the expression information of characters in calligraphic works provided by the present invention;

Fig. 3 is a schematic flowchart of part of the method for extracting the expression information of characters in calligraphic works provided by the present invention;

Fig. 4 is a schematic flowchart of part of the method for extracting the expression information of characters in calligraphic works provided by the present invention;

Fig. 5 shows the results of extracting the form information of characters from stele images provided by the present invention;

Fig. 6 shows the results of extracting the form and expression information of the characters and of the seals from tie images provided by the present invention;

Fig. 7 shows the form information of characters extracted from stele images using different methods;

Fig. 8 shows the form and expression information of characters and seals extracted from tie images using different methods;

Fig. 9 shows the form and expression information of characters and seals extracted from tie images using different methods;

Fig. 10 is the reference image containing the true form information of the characters, extracted manually by professionals.

Detailed Description of the Embodiments

To make the structure and advantages of the present invention clearer, the present invention is further described below with reference to the drawings.

Embodiment 1

A method for extracting the expression information of characters in calligraphic works, as shown in Fig. 1, includes:

101. Acquire a calligraphic work to be processed, and convert the color space of the calligraphic work to be processed into a preset color space to obtain a converted calligraphic work to be processed.

102. Extract the first-channel value of the converted calligraphic work to be processed, and distinguish the type of the converted calligraphic work to be processed according to a first threshold.

103. If the converted calligraphic work to be processed is a stele image, process it with a guided filter to obtain an image containing the form information of the characters of the stele image.

104. If the converted calligraphic work to be processed is a tie image, process it with the guided filter to obtain an image containing the form and expression information of the characters and of the seal in the tie image.

In practice, common calligraphic works fall mainly into stele images and tie images. As the names suggest, a stele image is an image captured from characters carved on a stone stele, typically with graceful and vigorous strokes, while a tie image is an image captured from characters written on paper, often also bearing seals and other elements. Because stele images and tie images have different characteristics, the expression information of the characters must be extracted differently depending on the type of work.

The color-space conversion involved here is from the commonly used RGB color space to the preset CIE-Lab color space. CIE L*a*b* (CIE-LAB) is the most complete color model customarily used to describe all colors visible to the human eye; it was proposed for this purpose by the Commission Internationale de l'Eclairage (CIE). Its three basic coordinates represent the lightness of the color (L*, where L* = 0 yields black and L* = 100 indicates white), its position between red/magenta and green (negative a* values indicate green, positive values indicate magenta), and its position between yellow and blue (negative b* values indicate blue, positive values indicate yellow). The Lab color space is used here in place of the original RGB color space because it has higher color saturation, which benefits subsequent image processing.

Accordingly, after the color space of the calligraphic work to be processed is converted, the method distinguishes whether the converted work is a stele image or a tie image according to the value of the first channel of the CIE-Lab color space, and then extracts the image of character form information and the image of expression information through the guided filter.

The form information mainly refers to the stroke information of the characters in a stele image at the turns and joins of the strokes; the expression information refers to the color of the characters and to the seals and similar content in a tie image.

After the calligraphic work to be processed has been processed by the above method, it can be classified, and follow-up processing can be carried out according to the specific type, so that the image of character form information and the image of expression information can be extracted from the classified work. This increases robustness to noise when extracting the artistic information of calligraphic works, reduces computational complexity and the loss of detail, and improves the accuracy of extracting the artistic information of calligraphic works.
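To make the overall flow concrete, the following is a minimal sketch of the classification-and-dispatch stage in Python, assuming OpenCV for the color-space conversion. The helper names (`otsu_threshold`, `process_stele_image`, `process_tie_image`) and the cut-off value 0.55 are taken from later steps of this embodiment and are illustrative only, not the patent's authoritative implementation.

```python
import cv2
import numpy as np

def classify_and_extract(bgr_image,
                         otsu_threshold,        # see the Otsu sketch in step 202 below
                         process_stele_image,   # hypothetical helper, steps 301-303
                         process_tie_image):    # hypothetical helper, steps 401-403
    """Sketch of steps 101-104: convert to CIE-Lab, classify, then dispatch."""
    # Step 101: convert the work from BGR (OpenCV channel order) to CIE-Lab.
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)

    # Step 102: compute the Otsu threshold k on the normalized b channel
    # (the "first channel" in the patent's terminology).
    k = otsu_threshold(b.astype(np.float64) / 255.0)

    # Steps 203-204 of the embodiment compare k with a preset value
    # (0.55 in the example given there) to decide between tie and stele images.
    if k >= 0.55:
        return process_tie_image(L, a)      # step 104: characters + seal
    else:
        return process_stele_image(L)       # step 103: characters only
```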

Optionally, as shown in Fig. 2, step 102 specifically includes:

201. Process the converted calligraphic work to be processed and determine the first-channel value of the converted calligraphic work to be processed.

In practice, the converted CIE-Lab color space has three color channels, namely the L channel, the a channel, and the b channel; the first-channel value here refers to the value of the b channel.

202. Determine, according to the first-channel value, a first threshold for distinguishing the foreground from the background of the converted calligraphic work to be processed.

In practice, the Otsu method (named after the inventor of the algorithm) is applied to the b channel of the Lab color space of the image to be processed to compute the optimal threshold k that separates the foreground of the image from the background. The principle of the Otsu method is to use a threshold to divide the original image into two images, foreground and background,

where:

Foreground: the number of points, the moment, the average gray level, and other statistics of the foreground under the current threshold;

Background: the number of points, the moment, the average gray level, and other statistics of the background under the current threshold.

When the optimal threshold is chosen, the difference between the background and the foreground should be the largest; in the Otsu algorithm the criterion that measures this difference is the maximum between-class variance.

To determine the first threshold for distinguishing the foreground from the background of the converted calligraphic work to be processed, the specific steps are as follows:

Step 1: Normalize the pixel values of the b channel from [0, 255] to [0, 1] using formula (1):

$$b'(x,y) = \frac{b(x,y)}{255} \qquad (1)$$

where b(x,y) and b'(x,y) denote the pixel values of the b channel at coordinate (x,y) before and after normalization, respectively.

Step 2: Compute the average pixel value of the b' channel using formula (2):

$$u_b = \frac{\sum_{x=1,y=1}^{x=M,y=N} b'(x,y)}{M \times N} \qquad (2)$$

where u_b denotes the average pixel value of the normalized b channel, b'(x,y) denotes the normalized pixel value of the b channel at coordinate (x,y), and M and N denote the length and width of the image, respectively.

Assume for now that the optimal threshold found for separating foreground and background is k. Then formula (3) is used to compute the proportion w_b,1 of pixels of the normalized b channel whose value is greater than k, the proportion w_b,2 of pixels whose value is less than or equal to k, the average pixel value u_b,1 of the pixels of the normalized b channel whose value is greater than k, and the average pixel value u_b,2 of the pixels whose value is less than or equal to k:

$$w_{b,1} = \frac{W_{b,1}}{M \times N}, \qquad w_{b,2} = \frac{W_{b,2}}{M \times N},$$
$$u_{b,1} = \frac{\sum i \times n(i)}{W_{b,1}},\ i > k, \qquad u_{b,2} = \frac{\sum i \times n(i)}{W_{b,2}},\ i \le k \qquad (3)$$

where W_b,1 and W_b,2 denote the number of pixels of the normalized b channel whose value is greater than k and the number whose value is less than or equal to k, respectively, i denotes a pixel value in the image, and n(i) denotes the number of pixels whose value equals i.

Step 3: Traverse every possible value of k and compute the between-class difference using formula (4):

$$G_b = w_{b,1} \times (u_{b,1} - u_b)^2 + w_{b,2} \times (u_{b,2} - u_b)^2 \qquad (4)$$

where G_b denotes the difference between the foreground and background classes during binarization; when G_b reaches its maximum, the optimal binarization threshold is obtained.

The optimal threshold k obtained in this way is used as the first threshold to distinguish the type of the converted calligraphic work to be processed.
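A minimal NumPy sketch of the threshold search described in steps 1-3 above follows; the 256-level grid of candidate thresholds over the normalized channel is an implementation assumption, and the function name `otsu_threshold` is illustrative.

```python
import numpy as np

def otsu_threshold(channel01):
    """Otsu search over a channel normalized to [0, 1] (formulas (1)-(4))."""
    u_b = channel01.mean()                      # formula (2): global mean
    total = channel01.size
    best_k, best_G = 0.0, -1.0
    # Assumption: candidate thresholds are taken on a 256-level grid.
    for k in np.linspace(0.0, 1.0, 256):
        above = channel01 > k
        below = ~above
        W1, W2 = above.sum(), below.sum()
        if W1 == 0 or W2 == 0:
            continue
        w1, w2 = W1 / total, W2 / total         # formula (3): class proportions
        u1 = channel01[above].mean()            # formula (3): class means
        u2 = channel01[below].mean()
        G = w1 * (u1 - u_b) ** 2 + w2 * (u2 - u_b) ** 2   # formula (4)
        if G > best_G:
            best_G, best_k = G, k
    return best_k
```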

203. If the first threshold is greater than or equal to a preset threshold, the converted calligraphic work to be processed is a tie image.

In practice, if the preset threshold is set to 0.55, for example, and the first threshold k is greater than or equal to 0.55, the converted calligraphic work to be processed is judged to be a tie image.

204. If the first threshold is smaller than the preset threshold, the converted calligraphic work to be processed is a stele image.

In practice, if the preset threshold is set to 0.55, for example, and the first threshold k is smaller than 0.55, the converted calligraphic work to be processed is judged to be a stele image.

Through the above steps, the converted calligraphic work to be processed is classified by type, which facilitates the subsequent targeted extraction of form information and expression information.

Optionally, as shown in Fig. 3, step 103 includes:

301. Binarize the converted calligraphic work to be processed according to the second-channel value to obtain a binarized first template.

In practice, step 301 specifically includes:

Step 1: Compute the average pixel value of the L channel using formula (5); the average pixel value of the L channel is the second-channel value:

$$u_L = \frac{\sum_{x=1,y=1}^{x=M,y=N} L(x,y)}{M \times N} \qquad (5)$$

where u_L denotes the average pixel value of the L channel, L(x,y) denotes the pixel value of the L channel at coordinate (x,y), and M and N denote the length and width of the image, respectively.

Step 2: Assume that the optimal threshold for binarizing the L channel of the stele image is T. Then formula (6) is used to compute the proportion w_L,1 of pixels of the L channel whose value is greater than T, the proportion w_L,2 of pixels whose value is less than or equal to T, the average pixel value u_L,1 of the pixels of the L channel whose value is greater than T, and the average pixel value u_L,2 of the pixels whose value is less than or equal to T:

$$w_{L,1} = \frac{W_{L,1}}{M \times N}, \qquad w_{L,2} = \frac{W_{L,2}}{M \times N},$$
$$u_{L,1} = \frac{\sum i \times n(i)}{W_{L,1}},\ i > T, \qquad u_{L,2} = \frac{\sum i \times n(i)}{W_{L,2}},\ i \le T \qquad (6)$$

where W_L,1 and W_L,2 denote the number of pixels of the L channel whose value is greater than T and the number whose value is less than or equal to T, respectively, i denotes a pixel value in the image, and n(i) denotes the number of pixels whose value equals i.

Step 3: Traverse every possible value of T and compute the between-class difference using formula (7):

$$G_L = w_{L,1} \times (u_{L,1} - u_L)^2 + w_{L,2} \times (u_{L,2} - u_L)^2 \qquad (7)$$

where G_L denotes the difference between the target class and the background class during binarization; when G_L reaches its maximum, the optimal binarization threshold T is obtained, and the L channel is then binarized with this threshold.

The image obtained by this binarization is the first template.
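A minimal sketch of this binarization, reusing the Otsu-style search from the sketch in step 202; the convention that character (foreground) pixels map to 1 is an assumption, since the binarization formula itself is not reproduced in this text.

```python
import numpy as np

def binarize_L_channel(L, otsu_threshold):
    """Step 301: Otsu binarization of the L channel (formulas (5)-(7))."""
    L01 = L.astype(np.float64) / 255.0
    T = otsu_threshold(L01)                 # optimal threshold maximizing G_L
    # Assumption: dark strokes (low lightness) are taken as foreground = 1.
    first_template = (L01 <= T).astype(np.uint8)
    return first_template
```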

302. Denoise the converted calligraphic work to be processed with the guided filter to obtain a denoised smooth template.

In practice, the guided filter here is a linear transformation of the converted calligraphic work to be processed according to preset rules. This linear transformation keeps the difference between the images before and after the transformation as small as possible, so that when the transformed image is denoised it remains as consistent as possible with the initial image.

The specific linear transformation and denoising procedure of this step is described below.

303. Extract, according to the first template in combination with the denoised smooth template, the form information of the characters of the stele image through the guided filter, to obtain the image containing the form information of the characters of the stele image.

In practice, when extracting the form information of the characters of a stele image, the form template I of the characters of the stele image is used as the input image and the smoothed image E is used as the guide image. This process uses the content of the smoothed image E (from which various kinds of noise have been removed) to filter the form template obtained by binarization (removing scattered noise, non-character connected components, blurred edges, and similar information from the template), so as to retain the form outline of the characters and obtain their form information.

The basic principle of the guided filter is to use the information of the guide image to filter the input image so as to obtain the required information. Therefore, for different goals, different input images and guide images are used.
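Under that principle, the stele-image extraction of step 303 might be invoked as sketched below, assuming a `guided_filter(src, guide, r, eps)` helper like the one sketched after the derivation later in this embodiment; the call pattern (template as input, smoothed L channel as guide) follows the description above, while the parameter values are placeholders.

```python
def extract_stele_form(first_template, smoothed_L, guided_filter, r=10, eps=0.01):
    """Step 303: filter the binarized form template I, guided by the smoothed image E."""
    # Input image: the binarized character template; guide image: the denoised
    # smooth template, whose content suppresses scattered noise and blurred edges.
    form_image = guided_filter(src=first_template.astype(float),
                               guide=smoothed_L.astype(float),
                               r=r, eps=eps)
    return form_image
```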

Optionally, the denoising of the converted calligraphic work to be processed with the guided filter to obtain the denoised smooth template specifically includes:

denoising the converted calligraphic work to be processed according to the first formula to obtain the denoised smooth template, the first formula being

$$I_O(x,y) = a_k I_g(x,y) + b_k, \quad \forall (x,y) \in \omega_k,$$

where I_O(x,y) is the pixel value at coordinate (x,y) of the filtered image, a_k and b_k are linear coefficients, I_g(x,y) is the pixel value at coordinate (x,y) of the guide image, and ω_k is a local window of radius r centered at pixel (x,y).

In practice, the input image is smoothed with the guided filter according to the content of the guide image. For example, taking I_i as the input image and I_g as the guide image, the guided filter is a linear transformation of the guide image, namely:

$$I_O(x,y) = a_k I_g(x,y) + b_k, \quad \forall (x,y) \in \omega_k$$

where I_O(x,y) is the pixel value at coordinate (x,y) of the filtered image, a_k and b_k are linear coefficients, I_g(x,y) is the pixel value at coordinate (x,y) of the guide image, and ω_k is a local window of radius r centered at pixel (x,y).

To minimize the difference between the input image and the output image, the following function must be minimized within the window ω_k:

$$E = \sum \left( (I_O(x,y) - I_i(x,y))^2 + \varepsilon a_k^2 \right) = \sum \left( (a_k I_g(x,y) + b_k - I_i(x,y))^2 + \varepsilon a_k^2 \right),$$

where E is the difference between the input image I_i and the output image I_O, and ε is a regularization parameter that prevents the value of a_k from becoming too large. When E is minimized, a_k and b_k are respectively:

$$a_k = (\sigma_k^2 + \varepsilon)^{-1} \left( \frac{1}{|\omega|} \sum_{(x,y) \in \omega_k} I_g(x,y)\, I_i(x,y) - \mu_k \bar{I}_{ik} \right),$$
$$b_k = \bar{I}_{ik} - a_k \mu_k$$

where μ_k and σ_k² are the mean and variance of I_g(x,y) within the local window ω_k, respectively, Ī_ik is the mean of I_i(x,y) within the window ω_k, and |ω| is the number of pixels in the window ω_k.

Since a pixel may be covered by several local windows, the filtered output image I_O(x,y) can be computed from the obtained parameters a_k and b_k by the following formula:

$$I_O(x,y) = \bar{a}_{xy}\, I_g(x,y) + \bar{b}_{xy},$$

where ā_xy and b̄_xy are the means of the coefficients of all windows covering pixel (x,y).

The output image at this point is the smooth template obtained by the denoising process.
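The derivation above corresponds to the standard guided-filter computation; the following is a minimal NumPy sketch of it using box filters, given as an illustrative implementation rather than the patent's own code (the uniform averaging over a (2r+1)×(2r+1) window is an implementation choice).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(src, guide, r=10, eps=0.01):
    """Guided filtering of `src` using `guide` (the first formula and a_k, b_k above)."""
    Ii = src.astype(np.float64)
    Ig = guide.astype(np.float64)
    size = 2 * r + 1                      # local window omega_k of radius r

    def box(x):                           # mean over each local window
        return uniform_filter(x, size=size, mode="reflect")

    mu_k = box(Ig)                        # mean of I_g in omega_k
    I_bar = box(Ii)                       # mean of I_i in omega_k
    corr = box(Ig * Ii)
    var_k = box(Ig * Ig) - mu_k ** 2      # variance of I_g in omega_k

    a_k = (corr - mu_k * I_bar) / (var_k + eps)   # linear coefficients
    b_k = I_bar - a_k * mu_k

    # Average the coefficients over all windows covering each pixel.
    a_bar, b_bar = box(a_k), box(b_k)
    return a_bar * Ig + b_bar             # I_O = a_bar * I_g + b_bar
```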

When smoothing a stele image, both the input image and the guide image are the second channel of the stele image. The process therefore filters the L-channel image using its own information (the key parameters that determine the filtering effect, namely the radius of the local window and the regularization parameter, are computed from the image itself, and the image is then filtered with those parameters) to obtain the smoothed image.

Optionally, the method further includes:

performing secondary denoising on the smooth template using the second formula to obtain a secondarily denoised smooth template, the second formula being

$$\sigma_n = \sqrt{\frac{\pi}{2}} \cdot \frac{1}{6(W-2)(H-2)} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| I(x,y) \ast N \right|,$$

where σ_n is the computed noise variance, W and H are the width and height of the image I, respectively, and N is a mask operator.

In practice, in order to smooth image noise automatically with the guided filter, the present invention designs an adaptive guided filter. The radius r of the local window ω_k and the regularization parameter ε are the key factors that determine the filtering effect, and their values are decided by the severity of the noise in the image, which can be estimated quickly as follows.

To determine the values of ε and r accurately, the variance of the noise is first computed with the following formula:

$$\sigma_n = \sqrt{\frac{\pi}{2}} \cdot \frac{1}{6(W-2)(H-2)} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| I(x,y) \ast N \right|$$

where σ_n is the computed noise variance, W and H are the width and height of the image I, respectively, I(x,y) denotes each pixel of the image I, and N is a mask operator.

Since the noise variance σ_n and the radius r of the window ω_k satisfy the linear relationship r = aσ_n + b, the radius r can be determined from the computed noise variance σ_n.

Substituting the preset parameters a and b into the above formula gives the specific relationship r = 0.2σ_n.

Since r and σ_n are taken as nonzero even numbers, the relationship is simplified further accordingly.

To simplify the relationship between ε and r, extensive experiments on data finally determined it to be

$$\varepsilon = \left( \frac{r}{20} \right)^2,$$

Under this relationship, the obtained regularization parameter ε and window radius r determine the smoothing effect of the adaptive filter.
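A minimal sketch of this adaptive parameter estimation follows. The second formula matches Immerkær's fast noise-variance estimator, so that estimator's 3×3 operator is used here as an assumed stand-in for the mask N, whose exact entries are not reproduced in this text; the rounding of r to a nonzero even number is likewise an interpretation of the description above.

```python
import numpy as np
from scipy.ndimage import convolve

def adaptive_guided_filter_params(image):
    """Estimate sigma_n with the second formula, then derive r and eps."""
    I = image.astype(np.float64)
    H, W = I.shape
    # Assumed mask N (Immerkaer's noise-estimation operator); the patent's own
    # mask is not shown in this text and may differ.
    N = np.array([[ 1, -2,  1],
                  [-2,  4, -2],
                  [ 1, -2,  1]], dtype=np.float64)
    response = convolve(I, N, mode="reflect")
    sigma_n = np.sqrt(np.pi / 2.0) / (6.0 * (W - 2) * (H - 2)) * np.abs(response).sum()

    # r = 0.2 * sigma_n, rounded to a nonzero even number (interpretation).
    r = max(2, int(round(0.2 * sigma_n / 2.0)) * 2)
    eps = (r / 20.0) ** 2                  # eps = (r / 20)^2
    return r, eps, sigma_n
```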

Optionally, as shown in Fig. 4, step 104 specifically includes:

401. Binarize the converted calligraphic work to be processed according to the second-channel value to obtain a binarized second template.

In practice, Step 1: the second-channel value is the average pixel value of the L channel:

$$u_L = \frac{\sum_{x=1,y=1}^{x=M,y=N} L(x,y)}{M \times N},$$

where u_L denotes the average pixel value of the L channel, L(x,y) denotes the pixel value of the L channel at coordinate (x,y), and M and N denote the length and width of the image, respectively.

Step 2: Assume that the optimal threshold for binarizing the L channel is T. Then compute the proportion w_L,1 of pixels of the L channel whose value is greater than T, the proportion w_L,2 of pixels whose value is less than or equal to T, the average pixel value u_L,1 of the pixels of the L channel whose value is greater than T, and the average pixel value u_L,2 of the pixels whose value is less than or equal to T:

$$w_{L,1} = \frac{W_{L,1}}{M \times N}, \qquad w_{L,2} = \frac{W_{L,2}}{M \times N},$$
$$u_{L,1} = \frac{\sum i \times n(i)}{W_{L,1}},\ i > T, \qquad u_{L,2} = \frac{\sum i \times n(i)}{W_{L,2}},\ i \le T$$

where W_L,1 and W_L,2 denote the number of pixels of the L channel whose value is greater than T and the number whose value is less than or equal to T, respectively, i denotes a pixel value in the image, and n(i) denotes the number of pixels whose value equals i.

Step 3: Traverse every possible value of T and compute the between-class difference using the following formula:

$$G_L = w_{L,1} \times (u_{L,1} - u_L)^2 + w_{L,2} \times (u_{L,2} - u_L)^2$$

where G_L denotes the difference between the target class and the background class during binarization; when G_L reaches its maximum, the optimal binarization threshold T is obtained, and the L channel is then binarized with this threshold.

The image obtained by this binarization is the second template.

402. Obtain a binarized third template of the converted calligraphic work to be processed according to the inverted values of the third channel of the converted calligraphic work to be processed in combination with a preset second threshold.

In practice, the inverted value a' of each pixel of the third channel of the tie image, namely the a channel, is first obtained:

$$a'(x,y) = 255 - a(x,y),$$

The a channel is then binarized according to the comparison between a' and the preset second threshold (for example, 110), yielding a binary image that is taken as the third template S of the seal in the tie image,

where S(x,y) is the pixel value at coordinate (x,y) of the resulting binary image, and a'(x,y) is the inverted pixel value at coordinate (x,y) of the a channel of the tie image.
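A minimal sketch of this seal-template construction; since the binarization formula for S is not reproduced in this text, the direction of the comparison (marking a pixel as seal when a'(x,y) falls below the second threshold, i.e. when the a channel is strongly red) is an assumption.

```python
import numpy as np

def seal_template(a_channel, second_threshold=110):
    """Step 402: binarize the inverted a channel to obtain the seal template S."""
    a_inv = 255 - a_channel.astype(np.int32)        # a'(x,y) = 255 - a(x,y)
    # Assumption: red seal pixels have large a values, hence small inverted values.
    S = (a_inv < second_threshold).astype(np.uint8)
    return S
```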

403. Combine the second template and the third template to obtain a combined template, and extract, in combination with the converted image to be processed, the form and expression information of the characters and of the seal in the tie image through the guided filter, to obtain an image containing the form and expression information of the characters and of the seal in the tie image.

In practice, the combined template of the tie image is used as the input image, the second channel of the tie image, namely the L channel, is used as the guide image, and the guided filter described above is used to extract the form and expression information of the characters and of the seal in the tie image, so as to obtain an image containing the form and expression information of the characters and of the seal.

In this extraction process, the combined template of the tie image is the input image and the second channel of the tie image is the guide image. The process filters the combined template using the information of the channel image that best reflects the low-lightness character of the black strokes, so as to obtain not only the form information describing the outline edges of the characters but also expression information such as the attack of the brush tip and the dry-and-wet variation of the strokes, and likewise the form and expression information of the seal in the tie image.

Optionally, the combining of the second template and the third template to obtain the combined template includes:

obtaining the combined template according to the third formula, the third formula being

where CS(x,y) is the pixel value at coordinate (x,y) of the resulting combined template, C(x,y) is the pixel value at coordinate (x,y) of the second template (the characters of the tie image), and S(x,y) is the pixel value at coordinate (x,y) of the third template (the seal of the tie image).
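The third formula itself is not reproduced in this text; the sketch below therefore assumes the natural combination, a pixel-wise union of the two binary templates, and then shows how step 403 might chain that template into the guided filter sketched earlier.

```python
import numpy as np

def combined_template(C, S):
    """Assumed third formula: union of the character template C and seal template S."""
    return np.maximum(C, S).astype(np.uint8)

def extract_tie_info(C, S, L_channel, guided_filter, r=10, eps=0.01):
    """Step 403: filter the combined template CS, guided by the L channel."""
    CS = combined_template(C, S)
    return guided_filter(src=CS.astype(float),
                         guide=L_channel.astype(float) / 255.0,
                         r=r, eps=eps)
```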

The method for extracting the expression information of characters in calligraphic works proposed in this embodiment converts the color space of the calligraphic work to be processed, distinguishes the type of the converted work according to the first-channel threshold, and processes the work differently depending on whether it is a stele image or a tie image, so as to obtain an image containing the form information of the characters of a stele image, or an image containing the form and expression information of the characters and of the seal of a tie image. This avoids the incompleteness of character information that results from relying solely on edge detection in the prior art, enriches the detail of the detected character information, and makes the study of calligraphic works more convenient.

To demonstrate the effect of the method, the following comparative experiments were carried out to show its advantages over the prior art.

Comparative experiments:

Simulation 1: simulation of the method of the present invention for extracting the form and expression information of characters from stele images and tie images.

Simulation 1 was carried out under the MATLAB R2010b software.

Referring to Fig. 5, simulation experiments were carried out on five famous stele images with a history of about 1,000 years, as shown at T02 and T03 in Fig. 5. Because these steles have suffered man-made or natural damage, extracting the form information of the characters in the stele images is adversely affected. For example, T03 in Fig. 5 contains a large amount of noise. By first smoothing the original stele image with the adaptive guided filter proposed in the present invention and then applying a second pass of guided filtering to the smoothed image, the form information of the characters in the stele image can be extracted relatively completely and accurately, as shown at T02, T03, T04, and T05 in Fig. 5.

Referring to Fig. 6, simulation experiments were carried out on four tie images with a history of more than 1,000 years. In these comparatively well-preserved tie images, the imprint of the "Four Treasures of the Study" is evident. The template CS shown in Fig. 6 contains the main form information of the characters and a small amount of form information of the seal, which is most visible at W02 and W04 in Fig. 6. Further filtering with the guided filter not only yields more complete form information but also extracts the ink-shading and stroke-pressure variations that convey the expression information. In the resulting images, the dry-and-wet strokes made by the calligrapher and the abrupt and gradual changes of the brush tip are well displayed, and the detail of the seal is also extracted relatively clearly and completely.

Simulation 2: comparative analysis of the method of the present invention against existing methods for extracting character information from stele images and tie images.

Simulation 2 was carried out under the MATLAB R2010b software, with the guided-filter parameters ε = 0.110 and r = 10. The method of the present invention is compared mainly with Otsu's objective functions (GA-Otsu), two-dimensional (2D) Otsu, Fast Fuzzy C-means (FFCM), and Fractional-Order Darwinian Particle Swarm Optimization based image segmentation (FODPSO), in order to show that the method of the present invention has significant advantages in extracting the artistic information of calligraphic works, especially the expression information of the characters in tie images. The comparison and analysis of the experimental results are described as follows:

Referring to Fig. 7, for stele images the main goals are to reduce the noise contained in the image and to extract the form information of the characters. In this experiment, GA-Otsu, 2D Otsu, and FFCM were selected for comparison with the method of the present invention. To unify the visual appearance of the extraction results, the results of the method of the present invention were post-processed with the Otsu method. As shown in Fig. 7, the results obtained by GA-Otsu and FFCM are easily disturbed by noise, so the extracted form information of the characters is unclear, as the results for T02 and T03 in Fig. 7 also show, while 2D Otsu performs better on images with less noise. In contrast, the Otsu post-processed result based on the output of the method of the present invention is the best: the various kinds of noise contained in the image are almost completely removed, and the detail of the characters is extracted more completely.

Referring to Fig. 8, for tie images both the form information and the expression information of the characters must be considered. First, regarding the extraction of form information, as shown in Fig. 8, all of the methods can extract the main form information of the characters in a tie image, but the method of the present invention retains more detail, as shown at W01 in Fig. 8. As for extracting the seal, when the brightness of the seal color is high, GA-Otsu, 2D Otsu, and FFCM can hardly extract the seal information at all; as the brightness of the seal color becomes darker, the seal information extracted by these three methods becomes more accurate, but it is still incomplete and unclear. In all cases, however, the method of the present invention can extract most of the seal information and retain more detail.

Second, regarding the extraction of the expression information of the characters in tie images, the method of the present invention is compared with multi-level image segmentation methods that express the image with multiple intensity values (FFCM (c=3), FFCM (c=4), three-level FODPSO, and four-level FODPSO). As shown in Fig. 9, FFCM (c=4) and four-level FODPSO obtain better results: they capture more form information and express more expression information, as shown at W03 and W04 in Fig. 9. FFCM (c=3) and three-level FODPSO contain less noise, but while suppressing noise they also lose useful information; for example, the seal information is missing from the W01 result in Fig. 9, the extraction is coarse, the expression information is unclear, and in particular the variation of ink density is not accurately extracted. In contrast, the method of the present invention extracts the form and expression information with higher accuracy; as shown at W04 in Fig. 9, the abrupt and gradual ink transitions between strokes are fully displayed, and almost all types of noise are removed.

The extraction of character information from stele images focuses mainly on the form information. To describe the experimental results of each method quantitatively, images containing the true form information of the characters, extracted manually by professionals, are used as the reference (as shown in Fig. 10), and the misclassification error (ME) is used as the metric. ME is defined as follows:

$$ME = 1 - \frac{|B_0 \cap B_T| + |F_0 \cap F_T|}{|B_0| + |F_0|}$$

where B_0 and F_0 denote the background and foreground of the image containing the true form information of the characters, respectively, B_T and F_T denote the background and foreground regions of the result image, and |·| denotes the number of elements in a set.
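A minimal sketch of the ME metric for two binary masks, assuming foreground pixels are marked 1 and background pixels 0 in both the reference and the result.

```python
import numpy as np

def misclassification_error(reference, result):
    """ME = 1 - (|B0 ∩ BT| + |F0 ∩ FT|) / (|B0| + |F0|) for binary masks."""
    ref = reference.astype(bool)    # foreground of the manually extracted reference
    res = result.astype(bool)       # foreground of the method under test
    B0, F0 = ~ref, ref
    BT, FT = ~res, res
    agree = np.logical_and(B0, BT).sum() + np.logical_and(F0, FT).sum()
    return 1.0 - agree / (B0.sum() + F0.sum())
```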

As shown in Table 1, the method of the present invention is superior to the other methods in the accuracy of the extraction results, yielding the smallest ME value. By contrast, GA-Otsu gives the worst result, with the largest ME value (0.0355). FFCM performs better than GA-Otsu but worse than 2D Otsu. Regarding the stability of the experimental results, the method of the present invention gives the most stable results, with the smallest standard deviation (0.0078). The experimental results therefore show that the method of the present invention is the most accurate and most stable method for extracting the shape and quality information of characters in stele images and post images.

Table 1. ME values of the results obtained by the different methods, measured against the ground-truth (standard) images

The results of the simulation experiments show that, by using a guided filter, the method of the present invention not only removes the various kinds of noise present in the images effectively, but also extracts the shape and quality information and the expression information of the characters in stele images and post images more accurately and more completely. Compared with other text-information extraction methods, the method of the present invention therefore achieves better results for this task.
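As a rough, non-authoritative illustration of how a guided filter of this kind can smooth a gray-scale calligraphy image before a template is extracted, the following Python sketch implements the standard gray-scale guided filter, i.e. the linear model I_O(x, y) = a_k·I_g(x, y) + b_k recited in claim 4 below. The window radius r, the regularization term eps, and the final fixed-threshold binarization are assumptions of this sketch, not values taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I_g, I_p, r=8, eps=1e-3):
    """Gray-scale guided filter: the output is a_k * I_g + b_k in each local window.

    I_g : guide image, float array scaled to [0, 1]
    I_p : input image to be filtered, float array scaled to [0, 1]
    r   : window radius (assumed value)
    eps : regularization term (assumed value)
    """
    size = 2 * r + 1
    mean_g = uniform_filter(I_g, size)
    mean_p = uniform_filter(I_p, size)
    var_g = uniform_filter(I_g * I_g, size) - mean_g * mean_g
    cov_gp = uniform_filter(I_g * I_p, size) - mean_g * mean_p

    a = cov_gp / (var_g + eps)   # per-window linear coefficient a_k
    b = mean_p - a * mean_g      # per-window linear coefficient b_k

    # Average the coefficients of all windows covering each pixel.
    return uniform_filter(a, size) * I_g + uniform_filter(b, size)

# Example: self-guided smoothing followed by a simple fixed threshold,
# standing in for the "smooth template" / binarization steps described above.
# gray = ...  # normalized gray-scale calligraphy image (placeholder)
# smooth = guided_filter(gray, gray)
# template = (smooth < 0.5).astype(np.uint8)
```

Using the image as its own guide preserves edges (i.e. strokes) while flattening background texture, which is why an edge-preserving filter of this kind is well suited to separating stroke detail from paper or stone noise.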

It should be noted that the embodiment described above is provided only as an illustration of the extraction method in one practical application; the extraction method may also be applied in other scenarios according to actual needs, with a specific implementation process similar to that of the above embodiment, which is not repeated here.

The serial numbers in the above embodiments are used for description only and do not imply any order of assembly or use.

The above description is only an embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (7)

1. A method for extracting expression information of characters in a calligraphy work, characterized in that the method comprises:
obtaining a calligraphy work to be processed, and converting the color space of the calligraphy work to be processed into a preset color space to obtain a converted calligraphy work to be processed;
extracting the first-channel values of the converted calligraphy work to be processed, and distinguishing the type of the converted calligraphy work to be processed according to a first threshold;
if the converted calligraphy work to be processed is a stele image, processing the converted calligraphy work to be processed with a guided filter to obtain an image containing the shape and quality information of the characters in the stele image;
if the converted calligraphy work to be processed is a post image, processing the converted calligraphy work to be processed with the guided filter to obtain an image containing the shape and quality information and the expression information of the characters and of the seals in the post image.

2. The method according to claim 1, characterized in that extracting the first-channel values of the converted calligraphy work to be processed and distinguishing the type of the calligraphy work to be processed according to the first threshold comprises:
processing the converted calligraphy work to be processed to determine its first-channel values;
determining, from the first-channel values, a first threshold for separating the foreground and the background of the converted calligraphy work to be processed;
if the first threshold is greater than or equal to a preset threshold, the converted calligraphy work to be processed is a post image;
if the first threshold is smaller than the preset threshold, the converted calligraphy work to be processed is a stele image.

3. The method according to claim 1, characterized in that, if the converted calligraphy work to be processed is a stele image, processing the stele image with a guided filter to obtain an image containing the shape and quality information of the characters in the stele image comprises:
binarizing the converted calligraphy work to be processed according to its second-channel values to obtain a binarized first template;
denoising the converted calligraphy work to be processed with the guided filter to obtain a denoised smooth template;
extracting, with the guided filter, the shape and quality information of the characters in the stele image from the first template combined with the denoised smooth template, to obtain the image containing the shape and quality information of the characters in the stele image.

4. The method according to claim 3, characterized in that denoising the converted calligraphy work to be processed with the guided filter to obtain the denoised smooth template specifically comprises:
denoising the converted calligraphy work to be processed according to a first formula to obtain the denoised smooth template, the first formula being
$$I_O(x, y) = a_k I_g(x, y) + b_k, \quad \forall (x, y) \in \omega_k,$$
where I_O(x, y) is the pixel value at coordinates (x, y) of the filtered image, a_k and b_k are linear coefficients, I_g(x, y) is the pixel value at coordinates (x, y) of the guide image, and ω_k is a local window of radius r centered at pixel (x, y).

5. The method according to claim 4, characterized in that the method further comprises:
performing a second denoising of the smooth template according to a second formula to obtain a twice-denoised smooth template, the second formula being
$$\sigma_n = \sqrt{\frac{\pi}{2}} \cdot \frac{1}{6(W-2)(H-2)} \sum_{x=1}^{W} \sum_{y=1}^{H} \left| I(x, y) * N \right|,$$
where σ_n is the variance of the computed noise, W and H denote the width and the height of the image I, respectively, and N is the mask operator.

6. The method according to claim 1, characterized in that, if the converted calligraphy work to be processed is a post image, processing the converted calligraphy work to be processed with the guided filter to obtain an image containing the shape and quality information and the expression information of the characters and of the seals in the post image comprises:
binarizing the converted calligraphy work to be processed according to its second-channel values to obtain a binarized second template;
obtaining a binarized third template from the inverted third-channel values of the converted calligraphy work to be processed, combined with a preset second threshold;
combining the second template and the third template to obtain a combined template, and extracting, with the guided filter applied to the converted image to be processed, the shape and quality information and the expression information of the characters and of the seals in the post image, to obtain the image containing the shape and quality information and the expression information of the characters and of the seals in the post image.

7. The method according to claim 6, characterized in that combining the second template and the third template to obtain the combined template comprises:
obtaining the combined template according to a third formula, where CS(x, y) is the pixel value at coordinates (x, y) of the resulting combined template, C(x, y) is the pixel value at coordinates (x, y) of the second template for the characters in the post image, and S(x, y) is the pixel value at coordinates (x, y) of the third template for the seals in the post image.
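To make the second formula in claim 5 concrete, the following Python sketch implements the same style of fast noise estimate: a high-pass mask response summed over the image interior and normalized by sqrt(π/2)/(6(W−2)(H−2)). The specific 3×3 mask used for N is an assumption of this sketch (the claim only names a mask operator N), as is the border handling.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Laplacian-difference mask commonly used with this kind of estimator
# (assumed; the claim only refers to a mask operator N).
N = np.array([[ 1., -2.,  1.],
              [-2.,  4., -2.],
              [ 1., -2.,  1.]])

def estimate_noise(I):
    """Estimate the noise level of a 2-D gray-scale image I (second formula of claim 5)."""
    H, W = I.shape
    response = convolve(I.astype(float), N, mode='reflect')
    # Sum over the interior so that the 6*(W-2)*(H-2) normalization matches
    # the number of pixels that receive a full mask response.
    total = np.abs(response[1:-1, 1:-1]).sum()
    return np.sqrt(np.pi / 2.0) * total / (6.0 * (W - 2) * (H - 2))
```

An estimate of this kind can then drive the second, noise-adaptive denoising pass of the smooth template described in claim 5.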
CN201510080291.4A 2015-02-13 2015-02-13 A kind of extracting method to word expression information in calligraphy work Active CN104834890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510080291.4A CN104834890B (en) 2015-02-13 2015-02-13 A kind of extracting method to word expression information in calligraphy work

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510080291.4A CN104834890B (en) 2015-02-13 2015-02-13 A kind of extracting method to word expression information in calligraphy work

Publications (2)

Publication Number Publication Date
CN104834890A true CN104834890A (en) 2015-08-12
CN104834890B CN104834890B (en) 2018-01-05

Family

ID=53812768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510080291.4A Active CN104834890B (en) 2015-02-13 2015-02-13 A kind of extracting method to word expression information in calligraphy work

Country Status (1)

Country Link
CN (1) CN104834890B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1920819A (en) * 2006-09-14 2007-02-28 浙江大学 Writing brush calligraphy character seach method
CN101635099A (en) * 2008-07-22 2010-01-27 张炳煌 Chinese dynamic copybook writing system
CN103077516A (en) * 2012-12-31 2013-05-01 温佩芝 Digital rubbing method for stone inscription characters

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHENG S C et al.: "Subpixel edge detection of color images by principal axis analysis and moment-preserving principle", Pattern Recognition *
ZHU LEI: "Research on Image Segmentation Algorithms for Handwritten Chinese Characters in Ancient Books", China Master's Theses Full-text Database *
ZHAO QI: "Stroke Extraction Technology for Calligraphy Stele and Model-Book Characters and Its Implementation", China Master's Theses Full-text Database *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105404885A (en) * 2015-10-28 2016-03-16 北京工业大学 Two-dimensional character graphic verification code complex background noise interference removal method
CN105404885B (en) * 2015-10-28 2019-03-22 北京工业大学 A kind of two dimension character graphics identifying code complex background noise jamming minimizing technology
CN105373798A (en) * 2015-11-20 2016-03-02 西北大学 A calligraphy character extraction method based on K-nearest neighbor matting and mathematical morphology
CN105373798B (en) * 2015-11-20 2018-08-28 西北大学 One kind scratching figure and the morphologic writing brush word extracting method of mathematics based on k nearest neighbor
CN107403405A (en) * 2016-05-20 2017-11-28 富士通株式会社 Image processing apparatus, image processing method and information processor
CN106156794A (en) * 2016-07-01 2016-11-23 北京旷视科技有限公司 Character recognition method based on writing style identification and device
CN106446920A (en) * 2016-09-05 2017-02-22 电子科技大学 Stroke width transformation method based on gradient amplitude constraint
CN106446920B (en) * 2016-09-05 2019-10-01 电子科技大学 A kind of stroke width transform method based on gradient amplitude constraint
CN108764070A (en) * 2018-05-11 2018-11-06 西北大学 A kind of stroke dividing method and calligraphic copying guidance method based on writing video
CN108764070B (en) * 2018-05-11 2021-12-31 西北大学 Stroke segmentation method based on writing video and calligraphy copying guidance method
CN110533049A (en) * 2018-05-23 2019-12-03 富士通株式会社 The method and apparatus for extracting seal image
CN110533049B (en) * 2018-05-23 2023-05-02 富士通株式会社 Method and device for extracting seal image

Also Published As

Publication number Publication date
CN104834890B (en) 2018-01-05

Similar Documents

Publication Publication Date Title
CN104834890B (en) A kind of extracting method to word expression information in calligraphy work
CN107316077B (en) An automatic fat cell counting method based on image segmentation and edge detection
CN103198315B (en) Based on the Character Segmentation of License Plate of character outline and template matches
CN110619642B (en) Method for separating seal and background characters in bill image
US9251614B1 (en) Background removal for document images
CN106096610B (en) A kind of file and picture binary coding method based on support vector machines
CN101599125A (en) A Binarization Method for Image Processing under Complicated Background
CN105374015A (en) Binary method for low-quality document image based on local contract and estimation of stroke width
US20130242354A1 (en) Method for simulating impact printer output
CN104463195A (en) Printing style digital recognition method based on template matching
CN105069788B (en) A kind of ancient architecture wall topic note is got dirty writing brush character image cluster segmentation method
CN104156941A (en) Method and system for determining geometric outline area on image
Shaikh et al. A novel approach for automatic number plate recognition
CN108764328A (en) The recognition methods of Terahertz image dangerous material, device, equipment and readable storage medium storing program for executing
CN104143199B (en) Image processing method for color laser marking
CN102842046B (en) A kind of calligraphic style recognition methods of extracting based on global characteristics and training
CN105373798B (en) One kind scratching figure and the morphologic writing brush word extracting method of mathematics based on k nearest neighbor
CN113538498A (en) Seal image segmentation method based on local binarization, electronic device and readable storage medium
CN112508024A (en) Intelligent identification method for embossed seal font of electrical nameplate of transformer
CN101567049B (en) Method for processing noise of half tone document image
CN112070684B (en) Method for repairing characters of a bone inscription based on morphological prior features
CN106446920A (en) Stroke width transformation method based on gradient amplitude constraint
CN105701816A (en) Automatic image segmentation method
CN118429242A (en) Image analysis method and system based on deep learning
CN110807747B (en) Document image noise reduction method based on foreground mask

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant