CN102147867B - Method for identifying traditional Chinese painting images and calligraphy images based on subject - Google Patents

Method for identifying traditional Chinese painting images and calligraphy images based on subject

Info

Publication number
CN102147867B
CN102147867B, CN102147867A, CN201110131873A
Authority
CN
China
Prior art keywords
sample image
image
traditional chinese
chinese painting
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110131873
Other languages
Chinese (zh)
Other versions
CN102147867A (en)
Inventor
鲍泓
潘卫国
何宁
李兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN 201110131873 priority Critical patent/CN102147867B/en
Publication of CN102147867A publication Critical patent/CN102147867A/en
Application granted granted Critical
Publication of CN102147867B publication Critical patent/CN102147867B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for identifying traditional Chinese painting images and calligraphy images based on a subject, comprising the following steps: scanning traditional Chinese painting works and calligraphy works from Chinese history before the modern era to obtain sample images of the traditional Chinese painting works and calligraphy works; carrying out Top-down-based image preprocessing on the sample images and randomly selecting training sample images and test sample images from the scanned sample images; extracting the subject feature vectors of the training sample images from the training sample images and training a sample image classifier; and extracting the subject feature vectors from the test sample images and identifying them with the trained sample image classifier to obtain the identification results. Based on the concept of the subject, the method extracts the subject features of traditional Chinese painting images and calligraphy images and thereby identifies them, so it can be widely applied to the identification and retrieval of such images.

Description

A subject-based method for identifying traditional Chinese painting images and calligraphy images
Technical field
The present invention relates to an image recognition method, and in particular to a subject-based method for identifying traditional Chinese painting images and calligraphy images.
Background technology
Traditional Chinese painting works and calligraphy works are important components of Chinese culture; they have a unique artistic form and stand on their own in the world of fine art. In recent years, with the rapid development of computer and multimedia technologies, digital images of traditional Chinese painting works and calligraphy works have grown daily, and how to retrieve traditional Chinese painting images and calligraphy images effectively has attracted more and more attention.
Traditional Chinese painting images and calligraphy images mainly comprise a left-white (blank) region and a main scene region. Given the characteristics of these images, setting the left-white region to a uniform color during data preprocessing lets the remaining main scene region better highlight the features of the traditional Chinese painting image or calligraphy image, so this main scene region is defined as the subject of the image. Among existing image recognition methods, image features are extracted in several ways: feature extraction from target regions based on a visual attention mechanism, feature extraction directly from the global image, feature extraction combining global and local features, and so on.
Existing research work has mainly concentrated on the creative style of traditional Chinese painting images, such as gongbi (fine-brush) and xieyi (freehand) painting, and on recognizing content such as landscapes, figures, and birds-and-flowers. Jiang Shuqiang et al. proposed in 2006 an effective detection and recognition method for traditional Chinese painting images, which first separates traditional Chinese painting images from general images and then classifies the paintings into the two styles of gongbi and xieyi. In 2009, Jana Zujovic et al. of Northwestern University in the USA proposed a classification algorithm by art school; similar to a content-based image retrieval (CBIR) system, this algorithm also mainly uses texture and color for feature extraction. Jia Li and James Z. Wang of the Pennsylvania State University proposed an image recognition method based on a mixture probabilistic model, which uses a two-dimensional wavelet multiresolution hidden Markov model to recognize traditional Chinese painting images by artist; in their tests they chose works of five artists from Chinese history, Shen Zhou, Dong Qichang, Gao Fenghan, Wu Changshuo and Zhang Daqian, and identified the artist of each work.
The above methods all concentrate on recognizing traditional Chinese painting images; no method has yet been found that identifies traditional Chinese painting images and calligraphy images based on the subject.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a method, based on the concept of the subject, that identifies traditional Chinese painting images and calligraphy images by extracting the features of their subjects.
To achieve the above object, the present invention adopts the following technical scheme: a subject-based method for identifying traditional Chinese painting images and calligraphy images, comprising the steps of: (1) scanning, with a scanner, traditional Chinese painting works and calligraphy works from Chinese history before the modern era to obtain sample images of the traditional Chinese painting works and calligraphy works; (2) performing Top-down-based image preprocessing on the sample images, which comprises the steps of: ① converting the sample image from the RGB color space to the HSV color space; ② performing Canny edge detection on the HSV-space sample image obtained in step ①; ③ performing edge dilation on the edge-detected sample image obtained in step ②; ④ performing region filling on the edge-dilated sample image obtained in step ③; ⑤ collecting color statistics of the background region outside the fill region of the region-filled sample image obtained in step ④, including computing statistics of each component in the HSV color space to obtain the mean values Ave_H, Ave_S and Ave_V of the components; ⑥ traversing, pixel by pixel, the sample image obtained by the preliminary scan, computing the difference between the H, S and V component values of each pixel and the respective mean values Ave_H, Ave_S and Ave_V in the HSV color space, comparing the differences with a threshold, regarding pixels within the threshold range as belonging to the left-white region, and setting them to a uniform color; (3) randomly choosing training sample images and test sample images from the sample images obtained by scanning; (4) extracting the subject feature vectors of the training sample images from the training sample images and training a classifier, which comprises the steps of: ① obtaining the gray-level histogram of a training sample image through the image preprocessing of step (2), with 256 gray levels; ② counting, for each bin of the gray-level histogram of the training sample image, the total number of occurrences Total in the training sample image, and finally generating a 256-dimensional subject feature vector for each training sample image, completing the extraction of the subject feature vectors of the training sample images; to account for the different sizes of the training sample images, the following formula is used:
Total_bin = Total / (Wide × High),
where Wide and High are the width and height of the training sample image, respectively;
(5) training the sample image classifier, extracting the subject feature vectors from the test sample images, and recognizing them with the trained sample image classifier, which comprises the steps of: ① training the sample image classifier, based on a machine learning model, on the extracted subject feature vectors of the training sample images to obtain the trained sample image classifier; ② extracting the subject feature vectors of the test sample images; ③ recognizing the extracted subject feature vectors of the test sample images with the trained sample image classifier to obtain the recognition results.
In said step (3), randomly choosing training sample images and test sample images from the sample images obtained by scanning comprises the steps of: ① defining the sample image classes, numbered 1 or 0, where 1 denotes a traditional Chinese painting sample image and 0 denotes a calligraphy sample image; ② denoting the sample images to be identified as I = {I₁, I₂}, where I₁ denotes the calligraphy sample images, written I₁ = {C₁, C₂, ..., Cₙ} with Cᵢ (i = 1, 2, ..., n) a calligraphy sample image obtained by scanning, and I₂ denotes the traditional Chinese painting sample images, written I₂ = {P₁, P₂, ..., Pₙ} with Pᵢ (i = 1, 2, ..., n) a traditional Chinese painting sample image obtained by scanning; ③ randomly choosing a set number of sample images from I₁ and I₂ respectively as the training sample image set T, denoted {I₁′, I₂′}, where I₁′ denotes the calligraphy training sample images and I₂′ denotes the traditional Chinese painting training sample images, and taking the remaining sample images in I₁ and I₂ as the test sample image set {e₁, e₂, ..., eₘ}, where eᵢ (i = 1, 2, ..., m) is a test sample image.
In said step (5) ①, the algorithm used for training based on a machine learning model is one of a decision tree algorithm, an artificial neural network, a support vector machine algorithm, and a Bayesian learning algorithm.
By adopting the above technical scheme, the present invention has the following advantages: 1. Based on the concept of the subject, the present invention extracts the features of the subjects of traditional Chinese painting images and calligraphy images and thereby identifies them. 2. Through the preprocessing of traditional Chinese painting images and calligraphy images, the present invention handles the left-white (blank) region of the image background, so that the features of the subject of each image stand out, which aids feature extraction from the subjects of traditional Chinese painting images and calligraphy images. 3. When extracting the subject features of traditional Chinese painting images and calligraphy images, the present invention can ignore the influence of image size on the subject feature vector. The present invention can therefore be widely applied to the identification of traditional Chinese painting images and calligraphy images.
Description of drawings
Fig. 1 is the processing flow chart of the present invention
Fig. 2 is a sample example of a traditional Chinese painting image used in the present invention
Fig. 3 is a sample example of a calligraphy image used in the present invention
Fig. 4 is the flow chart of sample image preprocessing in the present invention
Fig. 5 is the edge detection result of a traditional Chinese painting image in the present invention
Fig. 6 is the edge detection result of a calligraphy image in the present invention
Fig. 7 is the result after edge dilation of the traditional Chinese painting image in the present invention
Fig. 8 is the result after edge dilation of the calligraphy image in the present invention
Fig. 9 is the result after region filling of the traditional Chinese painting image in the present invention
Fig. 10 is the result after region filling of the calligraphy image in the present invention
Fig. 11 is the result of the traditional Chinese painting image after the image preprocessing of the present invention
Fig. 12 is the result of the calligraphy image after the image preprocessing of the present invention
Fig. 13 is the gray-level histogram of the traditional Chinese painting image in the present invention
Fig. 14 is the gray-level histogram of the calligraphy image in the present invention
Fig. 15 is the gray-level histogram of the traditional Chinese painting image after preprocessing in the present invention
Fig. 16 is the gray-level histogram of the calligraphy image after preprocessing in the present invention
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the method of the present invention for identifying traditional Chinese painting images and calligraphy images comprises the following steps:
1. Scan traditional Chinese painting works and calligraphy works from Chinese history before the modern era to obtain sample images of the traditional Chinese painting works and calligraphy works.
As shown in Fig. 2 and Fig. 3, the sample images of the present invention are obtained by scanning with an Epson Expression 10000XL scanner.
2. Perform data preprocessing on the sample images and extract the features of the sample image subjects.
A sample image mainly comprises a left-white (blank) region and a main scene region. Given the characteristics of the sample images themselves, setting the left-white region to a uniform color lets the remaining main scene region better highlight the features of the sample image, so this main scene region is defined as the subject of the sample image.
Because of their age, the left-white regions of most sample images have discolored, and this information would interfere with the extraction of the sample image subject features. The purpose of the image preprocessing is to set the left-white region of the sample image to a uniform color and thus reduce the interference with subject feature extraction.
As shown in Fig. 4, the present invention performs Top-down-based image preprocessing on the sample images, which comprises the following steps:
(1) First, transform the color space of the sample image from RGB (the red, green, blue color space) to HSV (the hue, saturation, value color space). The RGB color space is not a perceptually uniform color space: distances in the RGB space do not represent the color similarity perceived by the human eye, so although this representation is simple, it differs considerably from human perception. The HSV color space is better suited to processing color features. It is composed of three components, hue H, saturation S and value (brightness) V, and is closer to human visual perception. Hue H represents different colors such as red, orange and green, with a component value range of 0 to 360. Saturation S represents the depth of the color, with a component value range of 0 to 1. Value V represents the brightness of the color; it is affected by the strength of the light source and is usually measured as a percentage, with a component value range of 0% to 100%, where black is 0% and white is 100%. The HSV color model corresponds to the Munsell three-dimensional color system: the variation of each basic color component can be perceived independently, the color space is linearly scalable, and perceptible color differences are proportional to the Euclidean distance between points in the HSV color space. The conversion formulas from the RGB color space to the HSV color space are:
H = arccos{ [(R − G) + (R − B)] / [2 √((R − G)² + (R − B)(G − B))] }   if B ≤ G,
H = 2π − arccos{ [(R − G) + (R − B)] / [2 √((R − G)² + (R − B)(G − B))] }   if B > G,
S = [max(R, G, B) − min(R, G, B)] / max(R, G, B),
V = max(R, G, B) / 255.
In these formulas, R, G and B are the intensity values of the red, green and blue components of each pixel in the sample image.
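For illustration only (not part of the claimed method), the conversion formulas above can be written as a minimal Python sketch; the function name and the assumption that R, G and B arrive as plain numbers in 0–255 are hypothetical choices.

# Minimal sketch of the RGB-to-HSV formulas above; R, G, B in [0, 255] is an
# assumption for illustration, and gray pixels (R = G = B) are guarded against.
import numpy as np

def rgb_to_hsv_pixel(R, G, B):
    denom = np.sqrt((R - G) ** 2 + (R - B) * (G - B))
    theta = np.arccos(((R - G) + (R - B)) / (2 * denom)) if denom > 0 else 0.0
    H = theta if B <= G else 2 * np.pi - theta      # hue, here in radians
    mx, mn = max(R, G, B), min(R, G, B)
    S = (mx - mn) / mx if mx > 0 else 0.0           # saturation in [0, 1]
    V = mx / 255.0                                  # value in [0, 1]
    return H, S, V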
(2) As shown in Fig. 5 and Fig. 6, perform Canny edge detection on the V component of the sample image in the HSV color space, comprising the following steps:
① Smooth the sample image with a Gaussian filter. The two-dimensional Gaussian function is:
G(x, y) = [1 / (2πδ²)] · exp(−(x² + y²) / (2δ²)),
where δ is the parameter of the Gaussian filter, which controls the degree of smoothing, and x and y are the coordinates used to generate the Gaussian mask. A suitable mask is computed from this formula, and Gaussian smoothing is realized by standard convolution. The computed Gaussian mask is shown below:
2 4 5 4 2
4 9 12 9 4
5 12 15 12 5
4 9 12 9 4
2 4 5 4 2
② Use the Sobel gradient operator to compute the gradient estimate of each pixel.
The Sobel gradient operator has two 3 × 3 convolution kernels: G_x is the gradient component in the horizontal direction and G_y is the gradient component in the vertical direction. They are:
G_x = [−1 0 1; −2 0 2; −1 0 1],   G_y = [1 2 1; 0 0 0; −1 −2 −1],
The gradient magnitude, or edge strength, is computed as:
|G| = |G_x| + |G_y|.
③ If the horizontal gradient component G_x and the vertical gradient component G_y are known, the direction angle is computed as:
θ = arctan(G_y / G_x).
If the horizontal gradient component G_x is 0, the direction angle is determined by the vertical gradient component G_y alone and is taken to be 90° whenever G_y is non-zero.
④ Each pixel in the sample image connects to its neighboring pixels along only 4 possible directions: 0° (horizontal), 45° (positive diagonal), 90° (vertical), and 135° (negative diagonal). The direction angle is therefore quantized to one of these 4 angles:
0°: 0°–22.5° and 157.5°–180°;  45°: 22.5°–67.5°;
90°: 67.5°–112.5°;  135°: 112.5°–157.5°.
⑤ If the gradient value of a pixel in the sample image is the maximum along its direction angle, the pixel is kept; otherwise it is removed. The set of all pixels of the sample image whose gradient value is maximal along the direction angle constitutes the set of candidate edge points.
⑥ Set two gradient thresholds, a high threshold TH and a low threshold TL, where TH is generally 2 to 3 times TL. First remove from the candidate edge point set the pixels whose gradient values are below the high threshold TH, giving the edge point set F. Then process the set M of pixels whose gradient values lie between the two thresholds: if a point in M is adjacent to a point already in the set F, add that point to F. The edge point set F finally obtained is the edge point set of the sample image.
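As an illustrative sketch only, the smoothing, Sobel gradient, non-maximum suppression and hysteresis steps above correspond to what an off-the-shelf Canny implementation performs; the Python sketch below uses OpenCV as a stand-in, and the file name, mask size and threshold values are assumptions rather than the patent's values.

# Illustrative sketch: OpenCV's Canny bundles the Gaussian smoothing, Sobel
# gradients, non-maximum suppression and hysteresis steps described above.
import cv2

img = cv2.imread("sample_scan.jpg")                 # scanned sample image (BGR), name assumed
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
v_channel = hsv[:, :, 2]                            # edge detection runs on the V component
smoothed = cv2.GaussianBlur(v_channel, (5, 5), 0)   # 5x5 Gaussian mask
TL = 50                                             # low threshold (assumed value)
edges = cv2.Canny(smoothed, TL, TL * 3)             # TH taken as 2-3 times TL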
(3) As shown in Fig. 7 and Fig. 8, perform dilation on the edge-detected sample image obtained above, comprising the following steps:
Suppose A is the edge of the sample image obtained above and B is a 4 × 4 structuring element (this structuring element is chosen because experiments showed that it gives the best recognition results). The edge dilation of A follows the formula:
A ⊕ B = { z | [(B̂)_z ∩ A] ⊆ A },
where A and B are sets in Z² (the two-dimensional integer space), z is an element of that space, ⊕ denotes the dilation operation, and (B̂)_z denotes the reflection of B translated to the point z = (z₁, z₂).
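Continuing the earlier illustrative sketch, the edge dilation with a 4 × 4 structuring element could look as follows in Python with OpenCV; this is an illustration of the dilation step, not the patented code.

# Illustrative continuation: dilate the edge map A with a 4x4 structuring element B.
import numpy as np
import cv2

B = np.ones((4, 4), np.uint8)                       # 4x4 structuring element
dilated = cv2.dilate(edges, B, iterations=1)        # A dilated by B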
(4) As shown in Fig. 9 and Fig. 10, perform region filling on the edge-dilated sample image obtained above. Region filling is based on dilation, complementation and intersection; the region filling formula is:
X_k = (X_{k−1} ⊕ B) ∩ A^c   (k = 1, 2, 3, ...),
where A^c denotes the complement of A, X_{k−1} is the fill region at step k − 1 (starting from a point inside the region to be filled), and k is the iteration step of the algorithm.
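As an illustrative stand-in for iterating the formula above, SciPy's hole-filling routine produces the filled main-scene mask; the variable names continue from the earlier sketches and are assumptions.

# Illustrative sketch: binary_fill_holes stands in for iterating
# X_k = (X_{k-1} dilated by B, intersected with the complement of A) to convergence.
from scipy import ndimage

filled = ndimage.binary_fill_holes(dilated > 0)     # True inside the main scene region
background_mask = ~filled                           # remaining background ("left-white") region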
(5) As shown in Fig. 11 and Fig. 12, process the background region of the region-filled sample image obtained above. The background region outside the fill region of the sample image is regarded as the left-white region, denoted I_B. Collect the color statistics of the left-white region, mainly the mean values Ave_H, Ave_S and Ave_V of the components in the HSV color space:
Ave_H = (1/k) Σ_{i=1}^{k} h_i   (h_i ∈ I_B),
Ave_S = (1/k) Σ_{i=1}^{k} s_i   (s_i ∈ I_B),
Ave_V = (1/k) Σ_{i=1}^{k} v_i   (v_i ∈ I_B),
where h_i, s_i and v_i are the hue, saturation and value components of the i-th pixel in I_B, and k is the number of pixels in the left-white region I_B.
Traverse, pixel by pixel, the sample image obtained by the preliminary scan. For each pixel, compute the difference between its H, S and V component values in the HSV color space and the corresponding mean values Ave_H, Ave_S and Ave_V, and compare the differences with a threshold T_P (this threshold is obtained by experiment and lies between 0.15 and 0.2). Pixels whose differences are within the threshold T_P are regarded as belonging to the left-white region, and the left-white region is set to a uniform color, here white (as an example, without limitation). The computation is:
i_pex = white,   if |i_pex_h − Ave_H| ≤ T_P, |i_pex_s − Ave_S| ≤ T_P and |i_pex_v − Ave_V| ≤ T_P;
i_pex = unchange,   otherwise,
where white means the pixel is set to white, unchange means the original sample image pixel is kept unchanged, i_pex denotes each pixel in the sample image, and i_pex_h, i_pex_s and i_pex_v are the H, S and V component values of that pixel in the HSV color space.
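For illustration, the background-statistics and whitening step can be sketched as follows, continuing from the earlier sketches; the use of HSV values normalized to [0, 1] and the particular threshold value are assumptions.

# Illustrative sketch: compute Ave_H, Ave_S, Ave_V over the left-white region and
# set every pixel close to that average color to white. HSV components scaled to
# [0, 1] and T_P = 0.175 are assumptions, not the patent's values.
import numpy as np

hsv_f = hsv.astype(np.float32) / 255.0              # H, S, V scaled to [0, 1]
ave = hsv_f[background_mask].mean(axis=0)           # (Ave_H, Ave_S, Ave_V)
T_P = 0.175                                         # within the 0.15-0.2 range
near_background = np.all(np.abs(hsv_f - ave) < T_P, axis=2)
img[near_background] = (255, 255, 255)              # set the left-white region to white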
As shown in Figs. 13 to 16, the present invention starts from the creative characteristics of traditional Chinese painting works and calligraphy works, namely that calligraphy works apply ink relatively evenly while traditional Chinese painting works apply ink with more tonal layers. After the above data preprocessing, the subject of the sample image is more prominent. In the gray-level histograms, the abscissa is the gray level, 0 to 255 (256 levels in total), and the ordinate is the number of pixels of the sample image at each gray level.
3. Randomly choose training sample images and test sample images from the sample images.
The sample images are divided into training sample images and test sample images, and the labeling of training and test sample images comprises the following steps: (1) define the sample image classes, numbered 1 or 0, where 1 denotes a traditional Chinese painting sample image and 0 denotes a calligraphy sample image; (2) suppose the sample images to be identified are I, denoted {I₁, I₂}, where I₁ denotes the calligraphy sample images, written I₁ = {C₁, C₂, ..., Cₙ} with Cᵢ (i = 1, 2, ..., n) a calligraphy sample image obtained by scanning, and I₂ denotes the traditional Chinese painting sample images, written I₂ = {P₁, P₂, ..., Pₙ} with Pᵢ (i = 1, 2, ..., n) a traditional Chinese painting sample image obtained by scanning; (3) randomly choose a set number of sample images from I₁ and I₂ respectively as the training sample image set T, denoted {I₁′, I₂′}, where I₁′ denotes the calligraphy training sample images and I₂′ denotes the traditional Chinese painting training sample images, and take the remaining sample images in I₁ and I₂ as the test sample image set {e₁, e₂, ..., eₘ}, where eᵢ (i = 1, 2, ..., m) is a test sample image.
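A minimal Python sketch of this labeling and random split is given below; the sample counts and file names are purely illustrative assumptions.

# Illustrative sketch of the random train/test split; labels 0 (calligraphy) and
# 1 (painting) follow the text, while file names and counts are made up here.
import random

I1 = [("C%03d.jpg" % i, 0) for i in range(1, 121)]   # calligraphy sample images
I2 = [("P%03d.jpg" % i, 1) for i in range(1, 121)]   # painting sample images

def random_split(samples, n_train):
    shuffled = random.sample(samples, len(samples))  # random order, no repetition
    return shuffled[:n_train], shuffled[n_train:]    # training set, test set

train_c, test_c = random_split(I1, 60)
train_p, test_p = random_split(I2, 60)
train_set, test_set = train_c + train_p, test_c + test_p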
4. On the basis of the completed sample image preprocessing, extract the subject features of the training sample images and train the classifier. The subject feature extraction of a training sample image proceeds as follows: the gray-level histogram of the training sample image is obtained through the above image preprocessing, with 256 gray levels; for each bin of the gray-level histogram, the total number of occurrences Total in the sample image is counted; finally, a 256-dimensional subject feature vector is generated for each training sample image. To account for the different sizes of the training sample images, the present invention uses the following formula:
Total_bin = Total / (Wide × High),
where Wide and High are the width and height of the training sample image, respectively.
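For illustration, the 256-dimensional subject feature vector described above can be computed as in the sketch below; the helper name subject_feature is a hypothetical choice reused in the later sketches.

# Illustrative sketch: 256-bin gray-level histogram of the preprocessed image,
# normalized by width x height, i.e. Total_bin = Total / (Wide x High).
import cv2
import numpy as np

def subject_feature(preprocessed_bgr):
    gray = cv2.cvtColor(preprocessed_bgr, cv2.COLOR_BGR2GRAY)
    total, _ = np.histogram(gray, bins=256, range=(0, 256))   # Total per bin
    high, wide = gray.shape
    return total.astype(np.float64) / (wide * high)           # 256-dim feature vector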
The present invention trains the training sample images with a support vector machine recognition method (as an example, without limitation) and, after training, obtains a sample image classifier model. The experiment uses the toolkit provided by LIBSVM, and the model can be expressed with the following function:
model=svmtrain(T_F,label,options)
In the above function call, svmtrain is the support vector machine training routine; T_F denotes the subject feature vectors of the extracted training sample images; label denotes the class labels of the corresponding training sample images, here taking the value 0 or 1 for calligraphy and traditional Chinese painting sample images respectively; and options selects the parameters. For example, options = '-t 2 -s 0 -b 1 -c 1' means that the kernel function is the radial basis function (RBF) kernel, the SVM type is C-SVC, the C-SVC penalty coefficient is 1, and probability estimates are required.
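The LIBSVM call above is quoted from the experiments; as a hedged, roughly equivalent Python sketch, scikit-learn's SVC can be trained on the same features. The file lists and the helper subject_feature() come from the earlier illustrative sketches and are assumptions.

# Illustrative substitute for the LIBSVM MATLAB call: train a C-SVC with an RBF
# kernel, penalty C = 1 and probability estimates, mirroring '-t 2 -s 0 -c 1 -b 1'.
import cv2
import numpy as np
from sklearn.svm import SVC

X_train = np.array([subject_feature(cv2.imread(path)) for path, _ in train_set])
y_train = np.array([label for _, label in train_set])

model = SVC(kernel="rbf", C=1.0, probability=True)
model.fit(X_train, y_train)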
5. Extract the subject feature vectors of the test sample images from the test sample images and recognize them with the trained sample image classifier, completing the identification of the test sample images, comprising the following steps:
(1) perform data preprocessing on the test sample images; (2) perform subject feature extraction on the data-preprocessed test sample images to generate the subject feature vectors of the test sample images; (3) feed the subject feature vectors of the test sample images into the trained sample image classifier to obtain the recognition results.
The recognition results show that the test, using a support vector machine (as an example, without limitation) on the MATLAB R2008a software platform, obtains the prediction pre and the accuracy acc of the test sample images; the support vector machine prediction is handled with the following function:
[pre, acc] = svmpredict(label_1, H_F, model, '-b 1'),
In the above function call, svmpredict is the prediction function, label_1 is the class label of the test sample images, H_F is the subject feature vector generated from the test sample images, and model is the trained sample image classifier.
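Continuing the same illustrative Python sketch, prediction and the accuracy P = n_R / N_Total can be computed as follows.

# Illustrative continuation: predict the test labels and compute P = n_R / N_Total.
X_test = np.array([subject_feature(cv2.imread(path)) for path, _ in test_set])
y_test = np.array([label for _, label in test_set])

pred = model.predict(X_test)
accuracy = float((pred == y_test).sum()) / len(y_test)
print("recognition accuracy P =", accuracy)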
The recognition result can be evaluated with the following formula:
P = n_R / N_Total,
where n_R is the number of test sample images identified correctly and N_Total is the total number of test sample images.
The present invention is verified and explained with the following test results on traditional Chinese painting images and calligraphy images. The sample images used in the test were obtained by scanning the "Complete Works of Chinese Painting" and the "Complete Works of Chinese Calligraphy" to form a sample image library, from which training sample images and test sample images were then randomly chosen, as shown in the following table:
[Table: numbers of training sample images and test sample images chosen from the sample image library]
The result obtained from the test is:
P = n_R / N_Total = 238 / 240 = 0.992.
The above result shows that the image recognition method of the present invention obtains a very satisfactory recognition result and is helpful for the annotation and retrieval of traditional Chinese painting images and calligraphy images.
The above embodiments are only used to illustrate the present invention; each step may be varied, and on the basis of the technical scheme of the present invention, all improvements and equivalents of individual steps and proportions made according to the principle of the invention should not be excluded from the protection scope of the present invention.

Claims (3)

1. A subject-based method for identifying traditional Chinese painting images and calligraphy images, comprising the steps of:
(1) scanning, with a scanner, traditional Chinese painting works and calligraphy works from Chinese history before the modern era to obtain sample images of the traditional Chinese painting works and calligraphy works;
(2) performing Top-down-based image preprocessing on the sample images, comprising the steps of:
① converting the sample image from the RGB color space to the HSV color space;
② performing Canny edge detection on the HSV-space sample image obtained in step ①;
③ performing edge dilation on the edge-detected sample image obtained in step ②;
④ performing region filling on the edge-dilated sample image obtained in step ③;
⑤ collecting color statistics of the background region outside the fill region of the region-filled sample image obtained in step ④, including computing statistics of each component in the HSV color space to obtain the mean values Ave_H, Ave_S and Ave_V of the components;
⑥ traversing, pixel by pixel, the sample image obtained by the preliminary scan, computing the difference between the H, S and V component values of each pixel in the HSV color space and the respective mean values Ave_H, Ave_S and Ave_V, comparing the differences with a threshold, regarding the pixels within the threshold range as belonging to the left-white region, and setting them to a uniform color;
(3) randomly choosing training sample images and test sample images from the sample images obtained by scanning;
(4) extracting the subject feature vectors of the training sample images from the training sample images and training a sample image classifier, comprising the steps of:
① performing the image preprocessing of step (2) on the training sample image and obtaining the gray-level histogram of the preprocessed training sample image, with 256 gray levels;
② counting, for each bin of the gray-level histogram of the preprocessed training sample image, the total number of occurrences Total in the preprocessed training sample image, and finally generating a 256-dimensional subject feature vector for each preprocessed training sample image, thereby completing the extraction of the subject feature vectors of the preprocessed training sample images; to account for the different sizes of the preprocessed training sample images, the following formula is used:
Total_bin = Total / (Wide × High),
where Wide and High are the width and height of the preprocessed training sample image, respectively;
(5) extracting the subject feature vectors of the test sample images from the test sample images and recognizing them with the trained sample image classifier, completing the identification of the test sample images, comprising the steps of:
① performing data preprocessing on the test sample images;
② performing subject feature extraction on the data-preprocessed test sample images to generate the subject feature vectors of the test sample images;
③ feeding the subject feature vectors of the test sample images into the trained sample image classifier to obtain the recognition results.
2. The subject-based method for identifying traditional Chinese painting images and calligraphy images according to claim 1, characterized in that randomly choosing training sample images and test sample images from the sample images obtained by scanning in said step (3) comprises the steps of: ① defining the sample image classes, numbered 1 or 0, where 1 denotes a traditional Chinese painting sample image and 0 denotes a calligraphy sample image; ② denoting the sample images to be identified as I = {I₁, I₂}, where I₁ denotes the calligraphy sample images, written I₁ = {C₁, C₂, ..., Cₙ} with Cᵢ (i = 1, 2, ..., n) a calligraphy sample image obtained by scanning, and I₂ denotes the traditional Chinese painting sample images, written I₂ = {P₁, P₂, ..., Pₙ} with Pᵢ (i = 1, 2, ..., n) a traditional Chinese painting sample image obtained by scanning; ③ randomly choosing a set number of sample images from I₁ and I₂ respectively as the training sample image set T, denoted {I₁′, I₂′}, where I₁′ denotes the calligraphy training sample images and I₂′ denotes the traditional Chinese painting training sample images, and taking the remaining sample images in I₁ and I₂ as the test sample image set {e₁, e₂, ..., eₘ}, where eᵢ (i = 1, 2, ..., m) is a test sample image.
3. The subject-based method for identifying traditional Chinese painting images and calligraphy images according to claim 1 or claim 2, characterized in that the algorithm used for training based on a machine learning model in step (5) ① is one of a decision tree algorithm, an artificial neural network, a support vector machine algorithm, and a Bayesian learning algorithm.
CN 201110131873 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject Expired - Fee Related CN102147867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110131873 CN102147867B (en) 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110131873 CN102147867B (en) 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject

Publications (2)

Publication Number Publication Date
CN102147867A CN102147867A (en) 2011-08-10
CN102147867B true CN102147867B (en) 2012-12-12

Family

ID=44422124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110131873 Expired - Fee Related CN102147867B (en) 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject

Country Status (1)

Country Link
CN (1) CN102147867B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842046B (en) * 2012-08-07 2015-09-23 天津大学 A kind of calligraphic style recognition methods of extracting based on global characteristics and training
CN103336942A (en) * 2013-04-28 2013-10-02 中山大学 Traditional Chinese painting identification method based on Radon BEMD (bidimensional empirical mode decomposition) transformation
CN103336943B (en) * 2013-06-04 2016-06-08 广东药学院 For judging animal-feed is added the microscopic image identification method of medicine
CN103955718A (en) * 2014-05-15 2014-07-30 厦门美图之家科技有限公司 Image subject recognition method
CN104780465B (en) * 2015-03-25 2018-09-04 小米科技有限责任公司 Frame parameter adjusting method and device
CN106446754A (en) * 2015-08-11 2017-02-22 阿里巴巴集团控股有限公司 Image identification method, metric learning method, image source identification method and devices
CN106372656B (en) * 2016-08-30 2019-05-10 同观科技(深圳)有限公司 Obtain method, image-recognizing method and the device of the disposable learning model of depth
CN110877019A (en) * 2018-09-05 2020-03-13 西门子(中国)有限公司 Traditional Chinese medicinal material impurity removing device and method
CN110427990B (en) * 2019-07-22 2021-08-24 浙江理工大学 Artistic image classification method based on convolutional neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101196996A (en) * 2007-12-29 2008-06-11 北京中星微电子有限公司 Image detection method and device
CN101334835A (en) * 2008-07-28 2008-12-31 上海高德威智能交通系统有限公司 Color recognition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007206920A (en) * 2006-02-01 2007-08-16 Sony Corp Image processor and image processing method, retrieving device and method, program and recording medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101196996A (en) * 2007-12-29 2008-06-11 北京中星微电子有限公司 Image detection method and device
CN101334835A (en) * 2008-07-28 2008-12-31 上海高德威智能交通系统有限公司 Color recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杜雅娟 (Du Yajuan). Research on feature extraction and classification algorithms for traditional Chinese painting. China Master's Theses Full-text Database, 2008, full text. *

Also Published As

Publication number Publication date
CN102147867A (en) 2011-08-10

Similar Documents

Publication Publication Date Title
CN102147867B (en) Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN105261017B (en) The method that image segmentation based on road surface constraint extracts pedestrian's area-of-interest
CN105046196B (en) Front truck information of vehicles structuring output method based on concatenated convolutional neutral net
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
Fritsch et al. Monocular road terrain detection by combining visual and spatial information
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN103049763B (en) Context-constraint-based target identification method
CN101673338B (en) Fuzzy license plate identification method based on multi-angle projection
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN103793708B (en) A kind of multiple dimensioned car plate precise positioning method based on motion correction
CN106156684B (en) A kind of two-dimensional code identification method and device
CN105069466A (en) Pedestrian clothing color identification method based on digital image processing
CN104574375A (en) Image significance detection method combining color and depth information
CN102521616B (en) Pedestrian detection method on basis of sparse representation
CN113408584B (en) RGB-D multi-modal feature fusion 3D target detection method
CN102024156A (en) Method for positioning lip region in color face image
CN103984963B (en) Method for classifying high-resolution remote sensing image scenes
CN112417931B (en) Method for detecting and classifying water surface objects based on visual saliency
CN103186790A (en) Object detecting system and object detecting method
CN102147812A (en) Three-dimensional point cloud model-based landmark building image classifying method
CN110390228A (en) The recognition methods of traffic sign picture, device and storage medium neural network based
CN104102904A (en) Static gesture identification method
Meng et al. Text detection in natural scenes with salient region
CN105426924A (en) Scene classification method based on middle level features of images
Li et al. The research on traffic sign recognition based on deep learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121212

Termination date: 20150520

EXPY Termination of patent right or utility model