CN104463171A - Seal inscription extraction method based on PCNN - Google Patents


Info

Publication number
CN104463171A
CN104463171A · CN201410746225.1A
Authority
CN
China
Prior art keywords
image
seal
inscription
binarization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410746225.1A
Other languages
Chinese (zh)
Inventor
彭德中
章毅
吕建成
张蕾
张海仙
桑永胜
郭际香
毛华
胡鹏
林毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201410746225.1A
Publication of CN104463171A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a seal inscription extraction method based on a PCNN (pulse coupled neural network). The method comprises the following steps: (1) the original seal inscription image is binarized to obtain a binary inscription image that is easier to process; (2) continuous erosion and dilation operations are applied to the binary inscription image to reconstruct it; (3) the holes of the reconstructed image are filled to obtain a filled image Image1, and the reconstructed image is subtracted from Image1 to obtain a second image Image2 to be processed; (4) the two images are fed into a pulse coupled neural network model to obtain the corresponding output result images; (5) the output result images are thinned according to the corresponding refinement result formula to obtain the final thinned result image. The method overcomes the defects of traditional manual seal recognition and obtains near-ideal results.

Description

A seal inscription extraction method based on PCNN
Technical field
The invention belongs to the technical field of digital image processing and relates to a seal inscription extraction method based on a PCNN.
Background technology
Image retrieval and the associated processing and recognition play an increasingly important role in social production and daily life in today's era of explosive information growth. Humans obtain and exchange a large share of information directly through the medium of images. With the rapid development of computer technology and artificial intelligence, digital image processing has become a research focus of both society and academia. The traditional way of handling image retrieval is for a human observer to inspect the shape, size, color and other attributes of an image and let the central nervous system make the judgment; the recognition rate is very high, but the processing speed cannot satisfy the requirements of current applications.
Digital image processing has important applications in related fields such as artificial intelligence, machine learning and computer vision, and has gradually become a focus of scholars at home and abroad. In recent years, an artificial neural network model known as the pulse coupled neural network, PCNN (Pulse Coupled Neural Networks), has been increasingly applied to image processing. The PCNN is a biological network model close to the neural network of the human brain; in image processing, image recognition and decision optimization it has advantages over traditional artificial neural networks and broad application prospects. It simulates the characteristics of biological vision, has a biological background, and is referred to as a third-generation neural network. As early as 1999, Izhikevich proved mathematically that the actual biological cell model is consistent with the PCNN model, differing only in the coordinates of the variables. Research shows that the basic characteristics of the PCNN include a varying threshold, nonlinear modulation, synchronous pulse bursting, the capture property, dynamic pulse bursting, the auto-wave property and combined spatio-temporal characteristics, so it can be applied to image denoising, image segmentation, image edge detection and so on. The synchronization and auto-wave properties of the PCNN in particular have attracted the attention of researchers and give good results in the processing of digital images.
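The PCNN characteristics listed above can be made concrete with a minimal sketch of the standard discrete PCNN neuron model (feeding input, linking input, modulation, dynamic threshold); the linking kernel and all parameter values below are illustrative assumptions, not settings taken from the invention.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_iterate(S, n_iter=10, beta=0.2,
                 aF=0.1, aL=1.0, aT=0.5, VF=0.5, VL=0.2, VT=20.0):
    """Run a basic PCNN on a 2-D stimulus S in [0, 1]; return the
    cumulative firing map (how often each neuron pulsed)."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # local linking kernel (assumed)
    F = np.zeros_like(S, dtype=float)        # feeding input
    L = np.zeros_like(S, dtype=float)        # linking input
    T = np.ones_like(S, dtype=float)         # dynamic threshold
    Y = np.zeros_like(S, dtype=float)        # pulse output
    fired = np.zeros_like(S, dtype=float)
    for _ in range(n_iter):
        F = np.exp(-aF) * F + VF * convolve(Y, K, mode='constant') + S
        L = np.exp(-aL) * L + VL * convolve(Y, K, mode='constant')
        U = F * (1.0 + beta * L)             # nonlinear modulation
        Y = (U > T).astype(float)            # neurons fire when U exceeds T
        T = np.exp(-aT) * T + VT * Y         # threshold jumps after firing
        fired += Y
    return fired
```

Because fired neighbors feed back through the kernel, pulses spread outward from bright regions, which is the auto-wave behavior described above.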
Because of the role it carries, a seal image generally appears on media with a complex background, which adds great difficulty to the recognition of the inscription. The background may further contain structures similar to the inscription image whose colors closely resemble those of the inscription, which likewise makes inscription recognition difficult.
The main steps of inscription extraction are as follows: first the original RGB inscription image is binarized, which simplifies the image; the amount of image data is reduced while the object contours we need are highlighted. Then the contour skeleton of the inscription is extracted, and the extracted skeleton is fused with the binarized inscription image converted from the original, so that the principal structural features of the inscription are finally extracted. Both the template inscription image and the image to be compared undergo this processing, which converts the images into features the computer can understand; the features are compared, and the result of the discrimination is obtained.
The original image is based on the RGB color space, so through the API functions of OpenCV, the cross-platform computer vision library, it can very easily be converted to a grayscale image, which is then converted to a binary image that is easier to process; binarization greatly reduces the amount of information to be processed and simplifies the problem. Owing to the particular structure of an inscription, the binarized image may contain structural holes that hinder further processing; at this point continuous morphological erosion should be applied. Morphological erosion removes parts of the image according to structuring elements: it shrinks and thins parts of the binary image and filters away the fine borders and structures in the inscription. Besides continuous erosion, continuous dilation must also be applied to the image; in contrast to erosion, dilation coarsens, i.e. enlarges, parts of the binary image to a certain extent, reconstructing the image. Finally the borders of the resulting image are extracted, which yields a satisfactory result image.
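The preprocessing chain described above can be sketched as follows. The text names OpenCV; scipy.ndimage is used here as an equivalent stand-in so the sketch stays self-contained, and the threshold value and number of rounds are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def preprocess(gray, threshold=128, n_rounds=2):
    """Binarize a grayscale image (dark strokes become foreground, matching
    the convention that the object gray value is 0) and smooth it with
    alternating erosion/dilation repeated n_rounds times."""
    binary = gray < threshold               # dark inscription strokes -> True
    cleaned = binary
    for _ in range(n_rounds):
        cleaned = binary_erosion(cleaned)   # strip thin noise and spurs
        cleaned = binary_dilation(cleaned)  # restore the main stroke body
    return cleaned
```

Erosion followed by dilation is a morphological opening; isolated noise pixels vanish under erosion and are never restored, while the bulk of each stroke survives.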
A hole may be defined as a background area surrounded by a border of connected foreground pixels. Filling holes is also one of the common operations of image processing.
Let F be a binary image of size m × n (suppose the gray value of the background in the image is 1 and the gray value of the object is 0), and let F(i, j) denote the gray value of the (i, j)-th pixel. Before the image F is fed into the neural network, the pixel gray values F(i, j) are preprocessed:
$$M(F(i,j)) = \begin{cases} F(i,j), & 1 < i < m,\ 1 < j < n \\ 2, & \text{otherwise} \end{cases} \qquad (1)$$
Then the preprocessed image is fed into the network and iterated until no new neuron fires; the final result obtained is the hole-filled image. This step has an important effect on the subsequent skeleton thinning of the image and on the inscription extraction.
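The hole-filling step can be sketched without running a full PCNN: equation (1) seeds the image border so that a wave propagates inward from the background, which is exactly what a flood fill from the border computes. scipy's binary_fill_holes is used here as an equivalent stand-in for the PCNN-driven filling.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def fill_holes(binary_obj):
    """Fill background cavities fully enclosed by foreground pixels."""
    return binary_fill_holes(binary_obj)

# Example: a solid foreground square with one enclosed cavity.
ring = np.zeros((7, 7), dtype=bool)
ring[1:6, 1:6] = True
ring[3, 3] = False                     # cavity inside the object
filled = fill_holes(ring)              # cavity is filled, outside untouched
```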
So-called image thinning refers to the skeletonization of a binary image. Thinning peels the image layer by layer, eliminating unimportant parts while, most importantly, preserving the overall original shape of the image. This plays a significant role in the extraction of inscription images. The prior art does not yet offer a method that can quickly extract the inscription skeleton from a complex inscription background.
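The thinning idea above can be illustrated with the classical morphological skeleton (Lantuejoul's formula), built only from the erosion and opening operators already introduced; the invention's own thinning is PCNN-based, so this is a conceptual stand-in rather than the claimed algorithm.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening

def morphological_skeleton(A):
    """Skeleton as the union over n of erode^n(A) minus its opening:
    the points that each successive erosion cannot survive an opening."""
    skel = np.zeros_like(A, dtype=bool)
    eroded = A.astype(bool)
    while eroded.any():
        opened = binary_opening(eroded)
        skel |= eroded & ~opened        # points removed by the opening
        eroded = binary_erosion(eroded) # peel one layer and repeat
    return skel
```

Each loop iteration peels one layer, matching the "stripping layer by layer" description, and the accumulated residue keeps the general shape of the original region.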
Summary of the invention
The object of the invention is to overcome the above defects of the prior art and to provide a seal inscription extraction method based on a PCNN whose thinning algorithm can quickly extract the inscription skeleton from a complex inscription background. The concrete technical scheme is as follows:
A seal inscription extraction method based on a PCNN comprises the following steps:
1) The original inscription image is binarized to obtain a binary inscription image that is easier to process.
2) Continuous erosion and dilation operations are applied to the binary inscription image to be processed, reconstructing the image.
3) The holes of the reconstructed image are filled to obtain a filled image Image1, and the reconstructed image is subtracted from the filled image to obtain a second image Image2 to be processed.
4) The two images are each fed into the pulse coupled neural network model to obtain the corresponding output result images.
5) The output result images are thinned according to the corresponding refinement result formula to obtain the final thinned result image.
Compared with the prior art, the beneficial effects of the invention are:
The method applies well to the extraction of inscription images stamped on complex backgrounds. Using computer technology to process the inscription image digitally, and to extract and compare inscription images, eliminates the interference caused to inscription recognition by incomplete inscriptions, inconsistent stroke thickness and possible background noise under complex conditions. Combining the neural network model with inscription extraction brings into play the outstanding fault tolerance and the strong generalization and adaptation abilities of neural networks, giving seal recognition higher accuracy and efficiency. Because a neural network resembles the brain, and with the outstanding computing power of the computer, the defects of traditional manual seal recognition can be overcome and near-ideal results obtained.
Accompanying drawing explanation
Fig. 1 is the flow chart of the seal inscription extraction method based on a PCNN of the present invention.
Embodiment
To make the technical means, creative features, objects and effects realized by the invention easy to understand, the invention is further set forth below in conjunction with the accompanying drawing and a concrete example.
In Fig. 1, Image1 and Image2 are two binary images; their outputs at the n-th firing are denoted $O_1(n)$ and $O_2(n)$ respectively. The refinement result formula is:
$$R(n) \leftarrow R(n-1) \vee \big(O_1(n) \wedge O_2(n)\big) \vee \big(O_1(n-1) \wedge O_2(n)\big) \qquad (2)$$
In fact Image1 is obtained from the original image by the hole-filling algorithm, and Image2 is obtained by subtracting the binarized (reconstructed) image from Image1. The final result is obtained by the PCNN neural network model. Since the precondition requires the image to be operated on to be closed, the morphological erosion and dilation operations are necessary.
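The refinement recurrence (2) can be written out directly with numpy boolean arrays; here O1 and O2 stand for the two PCNN output sequences and are filled with toy data in the test, since the real sequences come from the network.

```python
import numpy as np

def refine(O1, O2):
    """Accumulate R(n) = R(n-1) | (O1(n) & O2(n)) | (O1(n-1) & O2(n))
    over two equally long sequences of boolean images (equation (2))."""
    R = np.zeros_like(O1[0], dtype=bool)
    prev_o1 = np.zeros_like(O1[0], dtype=bool)   # O1(0) taken as all-False
    for o1, o2 in zip(O1, O2):
        R = R | (o1 & o2) | (prev_o1 & o2)
        prev_o1 = o1
    return R
```

Because R only ever accumulates via logical OR, the result is monotone in n: once a pixel enters the refined skeleton it stays there.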
After the skeleton of the seal is obtained, the skeleton image extracted from the seal image is fused with the binary inscription image that has undergone the morphological erosion and dilation operations, yielding a new fused image.
The main principle of invariant moments is to take moments of image regions that are insensitive to image transformations as shape features. Invariant moments have no particular physical meaning for an image; they are defined purely mathematically. For a continuous image with image function f(x, y), the geometric moment (i.e. standard moment) of order p+q is defined as:
$$m_{pq} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x^p y^q f(x,y)\,dx\,dy, \qquad p,q = 0,1,2,\ldots \qquad (3)$$
The central moment of order p+q is defined as
$$\mu_{pq} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x-\bar{x})^p (y-\bar{y})^q f(x,y)\,dx\,dy, \qquad p,q = 0,1,2,\ldots \qquad (4)$$
where $\bar{x}$ and $\bar{y}$ give the center of gravity of the image:
$$\bar{x} = m_{10}/m_{00} \qquad (5)$$
$$\bar{y} = m_{01}/m_{00} \qquad (6)$$
For a discrete digital image, the integrals can be replaced by sums:
$$m_{pq} = \sum_{y=1}^{N}\sum_{x=1}^{M} x^p y^q f(x,y), \qquad p,q = 0,1,2,\ldots \qquad (7)$$
$$\mu_{pq} = \sum_{y=1}^{N}\sum_{x=1}^{M} (x-\bar{x})^p (y-\bar{y})^q f(x,y), \qquad p,q = 0,1,2,\ldots \qquad (8)$$
where N and M are defined as the height and the width of the image, respectively;
The normalized central moment is defined as:
$$\eta_{pq} = \mu_{pq}/\mu_{00}^{\rho} \qquad (9)$$
where
$$\rho = \frac{p+q}{2} + 1 \qquad (10)$$
The second- and third-order normalized central moments are used to construct the 7 invariant moments M1-M7:
$$M1 = \eta_{20} + \eta_{02} \qquad (11)$$
$$M2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2 \qquad (12)$$
$$M3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2 \qquad (13)$$
$$M4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2 \qquad (14)$$
$$M5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] \qquad (15)$$
$$M6 = (\eta_{20} - \eta_{02})\big[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}) \qquad (16)$$
$$M7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] \qquad (17)$$
These 7 invariant moments form a set of feature quantities; Hu (M. K. Hu) proved in 1962 that they are invariant to rotation, scaling and translation. Since every image has its own moments, matching two images only requires comparing whether their moments match, which achieves the goal of deciding whether the images are identical. In practical applications the invariance of M1 and M2 holds up relatively well, while the other invariant moments exhibit larger errors. Recognizing images by Hu invariant moments has the advantages of fast recognition speed and an uncomplicated matching procedure, but the recognition rate is lower; for this reason Hu invariant moments are generally applied to the low-order moments. By statistically computing the difference of the Hu moments between inscription images stamped by the same regular seal and the template inscription image, a threshold can be obtained: when the difference is below this threshold, it can be judged whether the two images come from impressions of the same seal, and thereby the authenticity of the inscription image to be identified can be determined.
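A minimal check of the matching idea above: M1 = eta_20 + eta_02 is computed for a shape and a translated copy, and a small absolute-difference threshold decides the match. The threshold value and test shapes are illustrative assumptions, not calibrated settings from the invention.

```python
import numpy as np

def hu_m1(f):
    """First Hu invariant M1 = eta20 + eta02 for a 2-D image array."""
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]
    m00 = f.sum()
    xbar, ybar = (x * f).sum() / m00, (y * f).sum() / m00
    mu20 = (((x - xbar) ** 2) * f).sum()
    mu02 = (((y - ybar) ** 2) * f).sum()
    return (mu20 + mu02) / m00 ** 2          # rho = 2 when p + q = 2

def moments_match(f, g, tol=1e-6):
    """Match decision by thresholding the M1 difference (illustrative)."""
    return abs(hu_m1(f) - hu_m1(g)) < tol

# Usage: a shape and a translated copy match; a different shape does not.
a = np.zeros((12, 12)); a[2:5, 2:7] = 1.0    # a 3x5 block
b = np.zeros((12, 12)); b[6:9, 4:9] = 1.0    # same block, translated
```

Translation invariance follows because the central moments are taken about the center of gravity; a real system would compare several of M1-M7, as the text notes, rather than M1 alone.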
The above is only the best mode for carrying out the invention. Any simple change or equivalent replacement of the technical scheme that can obviously be obtained by anyone familiar with the art within the technical scope disclosed by the invention falls within the protection scope of the invention.

Claims (1)

1. A seal inscription extraction method based on a PCNN, characterized in that it comprises the following steps:
1) the original inscription image is binarized to obtain a binary inscription image that is easier to process;
2) continuous erosion and dilation operations are applied to the binary inscription image to be processed, reconstructing the image;
3) the holes of the reconstructed image are filled to obtain a filled image Image1, and the reconstructed image is subtracted from the filled image to obtain a second image Image2 to be processed;
4) the two images are each fed into the pulse coupled neural network model to obtain the corresponding output result images;
5) the output result images are thinned according to the corresponding refinement result formula to obtain the final thinned result image.
CN201410746225.1A 2014-12-09 2014-12-09 Seal inscription extraction method based on PCNN Pending CN104463171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410746225.1A CN104463171A (en) 2014-12-09 2014-12-09 Seal inscription extraction method based on PCNN


Publications (1)

Publication Number Publication Date
CN104463171A true CN104463171A (en) 2015-03-25

Family

ID=52909185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410746225.1A Pending CN104463171A (en) 2014-12-09 2014-12-09 Seal inscription extraction method based on PCNN

Country Status (1)

Country Link
CN (1) CN104463171A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559309A (en) * 2018-11-30 2019-04-02 电子科技大学 Based on the multiple-objection optimization thermal-induced imagery defect characteristic extracting method uniformly evolved

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101576956A (en) * 2009-05-11 2009-11-11 天津普达软件技术有限公司 On-line character detection method based on machine vision and system thereof
CN101719216A (en) * 2009-12-21 2010-06-02 西安电子科技大学 Movement human abnormal behavior identification method based on template matching


Non-Patent Citations (4)

Title
MING-KUEI HU: "Visual Pattern Recognition by Moment Invariants", IRE Transactions on Information Theory *
尚利峰: "Application of Pulse Coupled Neural Networks in Image Processing", China Master's Theses Full-text Database *
张儒良 et al.: "A Matching Evolution Algorithm Based on Hu Invariant Moments", Journal of Southwest China Normal University *
毛华 et al.: "A Check Verification Method Based on PCNN Image Segmentation and Edge Matching", Computer Engineering and Science *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN109559309A (en) * 2018-11-30 2019-04-02 电子科技大学 Based on the multiple-objection optimization thermal-induced imagery defect characteristic extracting method uniformly evolved
CN109559309B (en) * 2018-11-30 2021-03-30 电子科技大学 Multi-objective optimization infrared thermal image defect feature extraction method based on uniform evolution

Similar Documents

Publication Publication Date Title
JP6395158B2 (en) How to semantically label acquired images of a scene
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
CN104899921A (en) Single-view video human body posture recovery method based on multi-mode self-coding model
CN107527054B (en) Automatic foreground extraction method based on multi-view fusion
CN109961416B (en) Business license information extraction method based on morphological gradient multi-scale fusion
CN113111758B (en) SAR image ship target recognition method based on impulse neural network
CN109711411B (en) Image segmentation and identification method based on capsule neurons
CN104504007A (en) Method and system for acquiring similarity degree of images
CN104463122A (en) Seal recognition method based on PCNN
CN106529378A (en) Asian human face age characteristic model generating method and aging estimation method
CN115471423A (en) Point cloud denoising method based on generation countermeasure network and self-attention mechanism
CN110111365B (en) Training method and device based on deep learning and target tracking method and device
CN113657387A (en) Semi-supervised three-dimensional point cloud semantic segmentation method based on neural network
CN104036242A (en) Object recognition method based on convolutional restricted Boltzmann machine combining Centering Trick
CN117115563A (en) Remote sensing land coverage classification method and system based on regional semantic perception
Salem et al. A novel face inpainting approach based on guided deep learning
CN104463171A (en) Seal inscription extraction method based on PCNN
Ling et al. A facial expression recognition system for smart learning based on YOLO and vision transformer
CN107766838B (en) Video scene switching detection method
CN113657375B (en) Bottled object text detection method based on 3D point cloud
Zemin et al. Image classification optimization algorithm based on SVM
Wu et al. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder
CN106778789A (en) A kind of fast target extracting method in multi-view image
Yao et al. Facial expression recognition method based on convolutional neural network and data enhancement
CN108229501B (en) Sketch recognition method fusing time sequence of texture features and shape features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325