CN102147867A - Method for identifying traditional Chinese painting images and calligraphy images based on subject - Google Patents

Method for identifying traditional Chinese painting images and calligraphy images based on subject

Info

Publication number
CN102147867A
CN102147867A (application CN201110131873A)
Authority
CN
China
Prior art keywords
sample image
image
traditional chinese
chinese painting
training
Prior art date
Legal status
Granted
Application number
CN 201110131873
Other languages
Chinese (zh)
Other versions
CN102147867B (en)
Inventor
鲍泓 (Bao Hong)
潘卫国 (Pan Weiguo)
何宁 (He Ning)
李兵 (Li Bing)
Current Assignee
Beijing Union University
Original Assignee
Beijing Union University
Priority date
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN 201110131873
Publication of CN102147867A
Application granted
Publication of CN102147867B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for identifying traditional Chinese painting images and calligraphy images based on the subject, comprising the following steps: scanning traditional Chinese painting works created before China's modern era and calligraphy works that have appeared throughout history to obtain sample images of the traditional Chinese painting works and calligraphy works; carrying out top-down image preprocessing on the sample images and randomly selecting training sample images and test sample images from the scanned sample images; extracting subject feature vectors of the training sample images from the training sample images and training a classifier; and training the sample image classifier, extracting the subject feature vectors from the test sample images, and using the trained sample image classifier to carry out identification, thus obtaining the identification results. The method provided by the invention is based on the concept of the subject: it extracts the subject features of traditional Chinese painting images and calligraphy images and identifies traditional Chinese painting images and calligraphy images, so it can be widely applied to the identification and retrieval of such images.

Description

Subject-based method for identifying traditional Chinese painting images and calligraphy images
Technical field
The present invention relates to an image recognition method, and in particular to a subject-based method for identifying traditional Chinese painting images and calligraphy images.
Background art
Traditional Chinese painting works and calligraphy works are important parts of Chinese culture. They have a unique artistic form and occupy a place of their own in the world of fine arts. In recent years, with the rapid development of computer and multimedia technologies, the number of digital images of traditional Chinese painting works and calligraphy works has grown steadily, so how to retrieve traditional Chinese painting images and calligraphy images effectively has attracted more and more attention.
Traditional Chinese painting images and calligraphy images consist mainly of blank regions and a main scene region. Given this characteristic of such images, setting the blank regions to a single uniform color during data preprocessing lets the remaining main scene region highlight the features of the traditional Chinese painting image or calligraphy image, so this main scene region is defined as the subject of the traditional Chinese painting image or calligraphy image. Existing image recognition methods extract image features in several ways: feature extraction from target regions based on a visual attention mechanism, feature extraction directly from the whole image, and feature extraction that combines global and local features.
Existing research focuses mainly on the painting style of traditional Chinese painting images, such as gongbi (meticulous) painting and xieyi (freehand) painting, and on the recognition of content such as landscapes, figures and flower-and-bird subjects. In 2006, Jiang Shuqiang et al. proposed an effective detection and recognition method for traditional Chinese painting images; it first separates traditional Chinese painting images from general images and then classifies the traditional Chinese painting images into the gongbi and xieyi categories. In 2009, Jana Zujovic et al. of Northwestern University in the United States proposed an algorithm for classification by artistic school; similar to a content-based image retrieval (CBIR) system, it mainly uses texture and color for feature extraction. Jia Li and James Z. Wang of Pennsylvania State University proposed an image recognition method based on a mixture probabilistic model, using a two-dimensional wavelet multiresolution hidden Markov model to identify the painter of a traditional Chinese painting image; in their tests, works of five artists in Chinese history, namely Shen Zhou, Dong Qichang, Gao Fenghan, Wu Changshuo and Zhang Daqian, were chosen and attributed to their painters.
The above methods all concentrate on the recognition of traditional Chinese painting images; no method has yet been found that recognizes traditional Chinese painting images and calligraphy images based on the subject.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a method that, based on the concept of the subject, extracts the features of the subjects of traditional Chinese painting images and calligraphy images and thereby recognizes traditional Chinese painting images and calligraphy images.
To achieve the above object, the present invention adopts the following technical scheme: a subject-based method for identifying traditional Chinese painting images and calligraphy images, comprising the following steps: (1) scanning, with a scanner, traditional Chinese painting works created before China's modern era and calligraphy works that have appeared throughout history, to obtain sample images of the traditional Chinese painting works and calligraphy works; (2) performing top-down image preprocessing on the sample images, comprising the steps of: 1. converting a sample image from the RGB color space to the HSV color space; 2. performing Canny edge detection on the sample image in the HSV color space obtained in step 1; 3. performing edge dilation on the edge-detected sample image obtained in step 2; 4. performing region filling on the dilated sample image obtained in step 3; 5. computing color statistics over the background region outside the filled region obtained in step 4, including accumulating each component in the HSV color space to obtain the mean values Ave_H, Ave_S and Ave_V of the components; 6. traversing the originally scanned sample image pixel by pixel, computing the differences between the H, S and V values of each pixel and the mean values Ave_H, Ave_S and Ave_V of the components in the HSV color space, comparing the differences with a threshold, treating the pixels within the threshold range as belonging to the blank region, and setting them to a single uniform color; (3) randomly selecting training sample images and test sample images from the scanned sample images; (4) extracting subject feature vectors of the training sample images from the training sample images and training a classifier, comprising the steps of: 1. obtaining the gray-level histogram of a training sample image through the image preprocessing of step (2), with 256 gray levels; 2. counting, for each bin of the gray-level histogram of the training sample image, the total number of occurrences Total in the training sample image, and finally generating a 256-dimensional subject feature vector for each training sample image, which completes the extraction of the subject feature vector of the training sample image; to account for the different sizes of the training sample images, the following formula is used:
Total_bin = Total / (Wide × High),
where Wide and High are the width and height of the training sample image, respectively;
(5) training a sample image classifier, extracting the subject feature vector from a test sample image, and recognizing it with the trained sample image classifier, comprising the steps of: 1. training a sample image classifier based on a machine learning model with the extracted subject feature vectors of the training sample images; 2. extracting the subject feature vector of the test sample image; 3. recognizing the extracted subject feature vector of the test sample image with the trained sample image classifier to obtain the recognition result.
Said step (3) of randomly selecting training sample images and test sample images from the scanned sample images comprises the steps of: 1. defining the sample image categories, numbered 1 or 0, where 1 denotes a traditional Chinese painting sample image and 0 a calligraphy sample image; 2. letting the set of sample images to be recognized be I, written I = {I1, I2}, where I1 denotes the calligraphy sample images, I1 = {C1, C2, ..., Cn}, with Ci (i = 1, 2, ..., n) being a calligraphy sample image obtained by scanning, and I2 denotes the traditional Chinese painting sample images, I2 = {P1, P2, ..., Pn}, with Pi (i = 1, 2, ..., n) being a traditional Chinese painting sample image obtained by scanning; 3. randomly selecting a set number of sample images from I1 and I2 respectively as the training sample image set T = {I1', I2'}, where I1' denotes the calligraphy training sample images and I2' denotes the traditional Chinese painting training sample images, and taking the remaining sample images of I1 and I2 as the test sample image set {e1, e2, ..., em}, where ei (i = 1, 2, ..., m) is a test sample image.
In sub-step 1 of said step (5), the algorithm used for the training based on a machine learning model is one of a decision tree algorithm, an artificial neural network, a support vector machine algorithm and a Bayesian learning algorithm.
By adopting the above technical scheme, the present invention has the following advantages. 1. Based on the concept of the subject, the present invention extracts the features of the subjects of traditional Chinese painting images and calligraphy images and thereby recognizes traditional Chinese painting images and calligraphy images. 2. Through the preprocessing of traditional Chinese painting images and calligraphy images, the present invention processes the blank regions of the image backgrounds so that the features of the subject of each image stand out, which facilitates the extraction of features of the subjects of traditional Chinese painting images and calligraphy images. 3. When extracting the subject features of traditional Chinese painting images and calligraphy images, the present invention can ignore the influence of the image size on the subject feature vector. The present invention can therefore be widely applied to the recognition of traditional Chinese painting images and calligraphy images.
Brief description of the drawings
Fig. 1 is the processing flow chart of the present invention
Fig. 2 shows sample examples of the traditional Chinese painting images used in the present invention
Fig. 3 shows sample examples of the calligraphy images used in the present invention
Fig. 4 is the flow chart of sample image preprocessing in the present invention
Fig. 5 shows the edge detection result of a traditional Chinese painting image in the present invention
Fig. 6 shows the edge detection result of a calligraphy image in the present invention
Fig. 7 shows the result after edge dilation of the traditional Chinese painting image in the present invention
Fig. 8 shows the result after edge dilation of the calligraphy image in the present invention
Fig. 9 shows the result after region filling of the traditional Chinese painting image in the present invention
Fig. 10 shows the result after region filling of the calligraphy image in the present invention
Fig. 11 shows the traditional Chinese painting image after image preprocessing in the present invention
Fig. 12 shows the calligraphy image after image preprocessing in the present invention
Fig. 13 is the gray-level histogram of the traditional Chinese painting image in the present invention
Fig. 14 is the gray-level histogram of the calligraphy image in the present invention
Fig. 15 is the gray-level histogram of the preprocessed traditional Chinese painting image in the present invention
Fig. 16 is the gray-level histogram of the preprocessed calligraphy image in the present invention
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and embodiments.
As shown in Fig. 1, the method of the present invention for recognizing traditional Chinese painting images and calligraphy images comprises the following steps:
1. Scan traditional Chinese painting works created before China's modern era and calligraphy works that have appeared throughout history to obtain sample images of the traditional Chinese painting works and calligraphy works.
As shown in Fig. 2 and Fig. 3, the sample images of the present invention are obtained by scanning with an Epson Expression 10000XL scanner.
2. Preprocess the sample images and extract the features of the sample image subjects.
A sample image consists mainly of blank regions and a main scene region. Given this characteristic of the sample image itself, setting the blank regions to a single uniform color lets the remaining main scene region highlight the features of the sample image more clearly, so this main scene region is defined as the subject of the sample image.
Because of their great age, the blank regions of the sample images have mostly discolored, and this information interferes with the extraction of the subject features of the sample images. The purpose of the image preprocessing is to set the blank regions of the sample images to a single uniform color and thus reduce the interference with subject feature extraction.
As shown in Fig. 4, the present invention performs top-down image preprocessing on the sample images; the process comprises the following steps:
(1) First, the color space of the sample image is converted from RGB (the red-green-blue color space) to HSV (the hue-saturation-value color space). The RGB color space is not perceptually uniform: distances in the RGB color space do not reflect the color similarity perceived by the human eye, so although this representation is simple it differs considerably from human perception. When processing color features it is therefore appropriate to use the HSV color space, which consists of the three components hue H, saturation S and value (brightness) V and is closer to human visual perception. Hue H distinguishes different colors such as red, orange and green, with a component range of 0 to 360; saturation S describes the depth of a color, with a component range of 0 to 1; value V describes the brightness of a color, is affected by the strength of the light source, is usually measured as a percentage, and ranges from 0% (black) to 100% (white). The HSV color model corresponds to the Munsell three-dimensional color system: the basic color components can be perceived and varied independently, the space is linearly scalable, and perceptible color differences are proportional to Euclidean distances between points in the HSV color space. The conversion from the RGB color space to the HSV color space is computed as follows:
H = arccos{ [(R - G) + (R - B)] / [2 · sqrt((R - G)^2 + (R - B)(G - B))] }, if B ≤ G,
H = 2π - arccos{ [(R - G) + (R - B)] / [2 · sqrt((R - G)^2 + (R - B)(G - B))] }, if B > G,
S = [max(R, G, B) - min(R, G, B)] / max(R, G, B),
V = max(R, G, B) / 255,
where R, G and B are the red, green and blue intensity component values of each pixel of the sample image.
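For illustration only, the MATLAB sketch below applies these conversion formulas directly to a scanned sample image; the file name sample.jpg is an assumption, and MATLAB's built-in rgb2hsv performs an equivalent conversion with all three components scaled to [0, 1].

% Direct implementation of the RGB-to-HSV formulas above (H in radians, S and V in [0, 1]).
rgbImg = double(imread('sample.jpg'));      % assumed file name; values in 0..255
R = rgbImg(:,:,1); G = rgbImg(:,:,2); B = rgbImg(:,:,3);
mx = max(max(R, G), B);
mn = min(min(R, G), B);
num = (R - G) + (R - B);
den = 2 * sqrt((R - G).^2 + (R - B).*(G - B)) + eps;   % eps avoids division by zero on gray pixels
theta = acos(min(max(num ./ den, -1), 1));             % clamp the argument for numerical safety
H = theta;
H(B > G) = 2*pi - theta(B > G);                        % second branch of the H formula
S = (mx - mn) ./ (mx + eps);
V = mx / 255;                                          % brightness component in [0, 1]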
(2) As shown in Fig. 5 and Fig. 6, perform Canny edge detection on the V component of the sample image in the HSV color space, comprising the following steps:
1. Smooth the sample image with a Gaussian filter. The two-dimensional Gaussian function is
G(x, y) = 1 / (2πδ^2) · exp(-(x^2 + y^2) / (2δ^2)),
where δ is the parameter of the Gaussian filter (its standard deviation) and controls the degree of smoothing, and x and y are the coordinates used to generate the Gaussian mask. A suitable mask is computed from this formula, and Gaussian smoothing is implemented with a standard convolution. The computed Gaussian mask is shown below:
2 4 5 4 2
4 9 12 9 4
5 12 15 12 5
4 9 12 9 4
2 4 5 4 2
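As a sketch of this smoothing step (not part of the patent text), the 5 x 5 mask above can be applied to the V component with a standard convolution in MATLAB; its entries sum to 159, so it is divided by that sum to preserve overall brightness. The variable V is assumed to be the brightness channel obtained in the previous step.

gmask = [2  4  5  4 2;
         4  9 12  9 4;
         5 12 15 12 5;
         4  9 12  9 4;
         2  4  5  4 2];
gmask = gmask / sum(gmask(:));        % normalize the Gaussian mask (sum = 159)
Vsmooth = conv2(V, gmask, 'same');    % Gaussian smoothing of the V component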
2. Use the Sobel gradient operator to compute the gradient estimate of each pixel.
The Sobel gradient operator has two 3 x 3 convolution kernels: Gx, the gradient component in the horizontal direction, and Gy, the gradient component in the vertical direction, computed as
Gx = [-1 0 1; -2 0 2; -1 0 1],   Gy = [1 2 1; 0 0 0; -1 -2 -1],
and the gradient magnitude (edge strength) is computed as
|G| = |Gx| + |Gy|.
3. With the horizontal gradient component Gx and the vertical gradient component Gy known, the gradient direction is computed as
θ = arctan(Gy / Gx).
If the horizontal gradient component Gx is 0, the direction depends only on the vertical gradient component Gy and is taken as 90° (the vertical direction).
4. In the sample image each pixel connects to its neighboring pixels along only 4 possible directions: 0° (horizontal), 45° (positive diagonal), 90° (vertical) and 135° (negative diagonal). The gradient direction is therefore quantized to these 4 angles as follows:
0°: 0°~22.5° and 157.5°~180°;  45°: 22.5°~67.5°;
90°: 67.5°~112.5°;  135°: 112.5°~157.5°.
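A minimal MATLAB sketch of the gradient and direction computation described in steps 2 to 4 is given below; Vsmooth is assumed to be the smoothed brightness channel from the previous sketch, and the quantization follows the four angle ranges listed above.

sx = [-1 0 1; -2 0 2; -1 0 1];          % Sobel kernel Gx (horizontal gradient)
sy = [ 1 2 1;  0 0 0; -1 -2 -1];        % Sobel kernel Gy (vertical gradient)
Gx = conv2(Vsmooth, sx, 'same');
Gy = conv2(Vsmooth, sy, 'same');
Gmag = abs(Gx) + abs(Gy);               % gradient magnitude |G| = |Gx| + |Gy|
theta = atan2(Gy, Gx) * 180 / pi;       % direction in degrees
theta = mod(theta, 180);                % directions are taken modulo 180 degrees
qdir = zeros(size(theta));              % 0 covers 0..22.5 and 157.5..180 degrees
qdir(theta >= 22.5  & theta < 67.5)  = 45;
qdir(theta >= 67.5  & theta < 112.5) = 90;
qdir(theta >= 112.5 & theta < 157.5) = 135;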
5. If the gradient magnitude of a pixel in the sample image is the maximum along its gradient direction, the pixel is kept; otherwise it is removed. The set of all pixels of the sample image whose gradient magnitude is maximal along the gradient direction is the set of candidate edge points.
6. Set two gradient thresholds, a high threshold TH and a low threshold TL, where TH is usually 2 to 3 times TL. First remove from the set of candidate edge points the pixels whose gradient magnitude is below the high threshold TH to obtain an edge point set F; then process the set M of pixels whose gradient magnitude lies between the two thresholds: if a point of M is adjacent to a point already in the edge point set F, add it to F. The edge point set F finally obtained is the edge point set of the sample image.
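In practice the whole of steps 1 to 6 is available as MATLAB's edge function with the 'canny' option, which performs the Gaussian smoothing, gradient estimation, non-maximum suppression and hysteresis thresholding internally. The sketch below is illustrative only; the threshold values are assumptions chosen so that the high threshold is 2.5 times the low one, inside the 2 to 3 times range stated above, and V is the brightness channel in [0, 1].

TH = 0.2;                        % assumed high hysteresis threshold (normalized gradient)
TL = TH / 2.5;                   % low threshold, so TH = 2.5 * TL
BW = edge(V, 'canny', [TL TH]);  % logical edge map of the sample image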
(3) As shown in Fig. 7 and Fig. 8, perform dilation on the edge-detected sample image obtained above, as follows.
Let A be the edge of the sample image obtained above and let B be a 4 x 4 structuring element (this structuring element is chosen because experiments showed that it gives the best recognition results). The dilation of the edge A follows the formula
A ⊕ B = { z | [(B̂)_z ∩ A] ⊆ A },
where A and B are sets in Z^2 (the two-dimensional integer space), z is an element of Z^2, ⊕ denotes the dilation operation, and (B̂)_z denotes the reflection of B translated to the point z = (z1, z2).
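A one-line MATLAB equivalent of this dilation, using the Image Processing Toolbox and the 4 x 4 structuring element mentioned above, might look as follows; BW is assumed to be the Canny edge map from the previous step.

se = strel('square', 4);          % 4 x 4 square structuring element B
BWdilated = imdilate(BW, se);     % dilation A (+) B of the edge map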
(4) As shown in Fig. 9 and Fig. 10, perform region filling on the dilated sample image obtained above. Region filling is based mainly on dilation, complementation and intersection of sets, and follows the formula
X_k = (X_{k-1} ⊕ B) ∩ A^c,   k = 1, 2, 3, ...,
where A^c is the complement of A, X_0 is a point inside the region to be filled, and k is the iteration step of the algorithm.
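The iteration above is what MATLAB's imfill performs internally when filling holes in a binary image, so the region-filling step can be sketched as follows (BWdilated being the dilated edge map from the previous step):

BWfilled = imfill(BWdilated, 'holes');   % filled main scene (subject) mask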
(5) As shown in Fig. 11 and Fig. 12, process the background region of the region-filled sample image obtained above. The background region outside the filled region of the sample image is regarded as the blank region and is denoted I_B. The color statistics of the blank region are computed, mainly the mean values Ave_H, Ave_S and Ave_V of the components in the HSV color space:
Ave_H = (1/k) Σ_{i=1}^{k} h_i,   h_i ∈ I_B,
Ave_S = (1/k) Σ_{i=1}^{k} s_i,   s_i ∈ I_B,
Ave_V = (1/k) Σ_{i=1}^{k} v_i,   v_i ∈ I_B,
where h_i, s_i and v_i are the hue, saturation and value (brightness) components of the i-th pixel of the blank region and k is the number of pixels in the blank region.
The originally scanned sample image is then traversed pixel by pixel. The differences between the H, S and V values of each pixel of the sample image and the mean values Ave_H, Ave_S and Ave_V of the components in the HSV color space are computed and compared with a threshold T_P (obtained experimentally; the threshold lies between 0.15 and 0.2). A pixel whose differences are all within the threshold T_P is regarded as belonging to the blank region, and the blank region is set to a single uniform color, here white (as an example, without limitation):
i_pex = white, if |i_pex_h - Ave_H| < T_P and |i_pex_s - Ave_S| < T_P and |i_pex_v - Ave_V| < T_P;
i_pex = unchanged, otherwise,
where white means the pixel is set to white, unchanged means the pixel of the original sample image is kept, i_pex denotes a pixel of the sample image, and i_pex_h, i_pex_s and i_pex_v are the H, S and V components of that pixel in the HSV color space.
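The following MATLAB sketch combines the background statistics and the blank-region replacement described above. It assumes that hsvImg comes from MATLAB's rgb2hsv, so all three components are scaled to [0, 1] and are directly comparable with a threshold between 0.15 and 0.2, that BWfilled is the filled subject mask from the previous step, and that T_P = 0.18 is one value inside the stated range.

hsvImg = rgb2hsv(imread('sample.jpg'));          % H, S, V all in [0, 1]
T_P = 0.18;                                      % assumed threshold inside 0.15..0.2
bgMask = ~BWfilled;                              % background = pixels outside the filled subject
Hc = hsvImg(:,:,1); Sc = hsvImg(:,:,2); Vc = hsvImg(:,:,3);
Ave_H = mean(Hc(bgMask));
Ave_S = mean(Sc(bgMask));
Ave_V = mean(Vc(bgMask));
blank = abs(Hc - Ave_H) < T_P & ...
        abs(Sc - Ave_S) < T_P & ...
        abs(Vc - Ave_V) < T_P;                   % pixels treated as blank region
rgbOut = hsv2rgb(hsvImg);                        % back to RGB to paint the blank region
for c = 1:3
    chan = rgbOut(:,:,c);
    chan(blank) = 1;                             % set blank-region pixels to white
    rgbOut(:,:,c) = chan;
end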
As shown in Fig. 13 to Fig. 16, the present invention starts from the creative characteristics of traditional Chinese painting works and calligraphy works, namely that calligraphy works use ink relatively evenly while traditional Chinese painting works use ink with a sense of layering; after the above data preprocessing, the subject of the sample image stands out more clearly. In the gray-level histograms, the horizontal axis is the gray level from 0 to 255 (256 levels in total) and the vertical axis is the number of pixels of the sample image occurring at each gray level.
3. Randomly choose training sample images and test sample images from the sample images.
The sample images are divided into training sample images and test sample images, and the labeling of the training sample images and test sample images proceeds as follows: (1) define the sample image categories, numbered 1 or 0, where 1 denotes a traditional Chinese painting sample image and 0 a calligraphy sample image; (2) let the set of sample images to be recognized be I, written I = {I1, I2}, where I1 denotes the calligraphy sample images, I1 = {C1, C2, ..., Cn}, with Ci (i = 1, 2, ..., n) being a calligraphy sample image obtained by scanning, and I2 denotes the traditional Chinese painting sample images, I2 = {P1, P2, ..., Pn}, with Pi (i = 1, 2, ..., n) being a traditional Chinese painting sample image obtained by scanning; (3) randomly select a set number of sample images from I1 and I2 respectively as the training sample image set T = {I1', I2'}, where I1' denotes the calligraphy training sample images and I2' denotes the traditional Chinese painting training sample images, and take the remaining sample images of I1 and I2 as the test sample image set {e1, e2, ..., em}, where ei (i = 1, 2, ..., m) is a test sample image.
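A possible MATLAB sketch of this random split is shown below. The directory names, the variable names files1 and files2, and the number nTrain of training images per class are all assumptions rather than values given in the patent; the labels follow the convention above (0 for calligraphy, 1 for traditional Chinese painting).

d1 = dir('calligraphy/*.jpg');  files1 = {d1.name};   % assumed folder of calligraphy scans (set I1)
d2 = dir('painting/*.jpg');     files2 = {d2.name};   % assumed folder of painting scans (set I2)
nTrain = 100;                                         % assumed number of training images per class
idx1 = randperm(numel(files1));
idx2 = randperm(numel(files2));
trainFiles = [files1(idx1(1:nTrain)), files2(idx2(1:nTrain))];
testFiles  = [files1(idx1(nTrain+1:end)), files2(idx2(nTrain+1:end))];
trainLabels = [zeros(nTrain, 1); ones(nTrain, 1)];
testLabels  = [zeros(numel(files1) - nTrain, 1); ones(numel(files2) - nTrain, 1)];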
4. On the basis of the completed sample image preprocessing, extract the subject features of the training sample images and train a classifier. The subject feature extraction for a training sample image proceeds as follows: the gray-level histogram of the training sample image is obtained through the image preprocessing described above, with 256 gray levels; for each bin of the gray-level histogram, the total number of occurrences Total in the sample image is counted, and finally a 256-dimensional subject feature vector is generated for each training sample image. To account for the different sizes of the training sample images, the present invention normalizes each bin with the formula
Total_bin = Total / (Wide × High),
where Wide and High are the width and height of the training sample image, respectively.
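As an illustrative sketch (not part of the patent text), the normalized 256-bin histogram feature of one preprocessed image can be computed in MATLAB as follows, where rgbOut is assumed to be the preprocessed image produced by the earlier sketch:

grayImg = rgb2gray(im2uint8(rgbOut));     % 8-bit gray image, 256 gray levels
counts = imhist(grayImg, 256);            % Total for each of the 256 bins
[High, Wide] = size(grayImg);             % image height and width
featureVec = counts' / (Wide * High);     % 256-dimensional subject feature vector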
The present invention trains on the training sample images with a support vector machine recognition method (as an example, without limitation); after training, a sample image classifier model is obtained. The experiments use the toolkit provided by LIBSVM, and the training can be expressed with the following function call:
model = svmtrain(label, T_F, options)
In the above function call, svmtrain is the support vector machine training routine, label is the vector of class labels of the training sample images (the values 0 and 1 denote a calligraphy sample image and a traditional Chinese painting sample image, respectively), T_F is the matrix of extracted subject feature vectors of the training sample images, and options is the parameter selection, for example options = '-t 2 -s 0 -b 1 -c 1', meaning that the kernel function is an intersection kernel, the SVM type is C-SVC, the C-SVC penalty coefficient is 1, and probability estimates are required.
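The sketch below shows the corresponding call with the LIBSVM MATLAB interface, in which the label vector is the first argument and the feature matrix the second. It assumes LIBSVM is installed and on the MATLAB path, and that trainFeatures is an N x 256 matrix of subject feature vectors (one row per training image) with trainLabels the matching N x 1 vector of 0/1 labels; both names are hypothetical. Note that in the stock LIBSVM distribution '-t 2' selects the radial basis function kernel; an intersection kernel would require a precomputed kernel ('-t 4') or a modified LIBSVM build.

options = '-t 2 -s 0 -b 1 -c 1';                         % kernel selected by -t, C-SVC, probability estimates, C = 1
model = svmtrain(trainLabels, trainFeatures, options);   % LIBSVM training (label vector first)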
5. Extract the subject feature vectors of the test sample images, recognize them with the trained sample image classifier, and complete the recognition of the test sample images, comprising the following steps:
(1) perform data preprocessing on the test sample image; (2) perform subject feature extraction on the preprocessed test sample image to generate the subject feature vector of the test sample image; (3) input the subject feature vector of the test sample image into the trained sample image classifier to obtain the recognition result.
The recognition tests of the present invention use a support vector machine (as an example, without limitation) on the MATLAB R2008a software platform, obtaining the prediction result pre and the accuracy acc of the test sample images. The support vector machine prediction uses the following function:
[pre, acc] = svmpredict(label_1, H_F, model, '-b 1'),
In the above function call, svmpredict is the prediction function, label_1 is the vector of class labels of the test sample images, H_F is the matrix of subject feature vectors generated from the test sample images, and model is the trained sample image classifier.
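A matching prediction sketch with the LIBSVM MATLAB interface is shown below; testFeatures (an M x 256 matrix of test subject feature vectors) and testLabels (the M x 1 vector of their true labels) are assumed to have been built in the same way as the training data.

[pre, acc] = svmpredict(testLabels, testFeatures, model, '-b 1');  % LIBSVM prediction
P = sum(pre == testLabels) / numel(testLabels);                    % recognition rate P = n_R / N_Total
fprintf('Recognition rate P = %.3f\n', P);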
The recognition result can be evaluated with the formula
P = n_R / N_Total,
where n_R is the number of correctly recognized test sample images and N_Total is the total number of test sample images.
The present invention is verified with the following tests on traditional Chinese painting images and calligraphy images. The sample images used in the tests were obtained by scanning the 'Complete Works of Chinese Painting' and the 'Complete Works of Chinese Calligraphy' to build a sample image library, from which training sample images and test sample images were then chosen at random, as shown in the following table:
[Table: numbers of training sample images and test sample images randomly selected for the calligraphy and traditional Chinese painting classes]
The result obtained in the tests is
P = n_R / N_Total = 238 / 240 = 0.992.
The above results show that the image recognition method of the present invention achieves a very satisfactory recognition result and is helpful for the annotation and retrieval of traditional Chinese painting images and calligraphy images.
The above embodiments are only intended to illustrate the present invention, and each step may be varied; on the basis of the technical solution of the present invention, any improvement or equivalent of an individual step or parameter made according to the principles of the present invention shall not be excluded from the protection scope of the present invention.

Claims (3)

1. A subject-based method for identifying traditional Chinese painting images and calligraphy images, comprising the steps of:
(1) scanning, with a scanner, traditional Chinese painting works created before China's modern era and calligraphy works that have appeared throughout history, to obtain sample images of the traditional Chinese painting works and calligraphy works;
(2) performing top-down image preprocessing on the sample images, comprising the steps of:
1. converting a sample image from the RGB color space to the HSV color space;
2. performing Canny edge detection on the sample image in the HSV color space obtained in step 1;
3. performing edge dilation on the edge-detected sample image obtained in step 2;
4. performing region filling on the dilated sample image obtained in step 3;
5. computing color statistics over the background region outside the filled region obtained in step 4, including accumulating each component in the HSV color space to obtain the mean values Ave_H, Ave_S and Ave_V of the components;
6. traversing the originally scanned sample image pixel by pixel, computing the differences between the H, S and V values of each pixel and the mean values Ave_H, Ave_S and Ave_V of the components in the HSV color space, comparing the differences with a threshold, treating the pixels within the threshold range as belonging to the blank region, and setting them to a single uniform color;
(3) randomly selecting training sample images and test sample images from the scanned sample images;
(4) extracting subject feature vectors of the training sample images from the training sample images and training a classifier, comprising the steps of:
1. obtaining the gray-level histogram of a training sample image through the image preprocessing of step (2), with 256 gray levels;
2. counting, for each bin of the gray-level histogram of the training sample image, the total number of occurrences Total in the training sample image, and finally generating a 256-dimensional subject feature vector for each training sample image, which completes the extraction of the subject feature vector of the training sample image; to account for the different sizes of the training sample images, the following formula is used:
Total_bin = Total / (Wide × High),
where Wide and High are the width and height of the training sample image, respectively;
(5) training a sample image classifier, extracting the subject feature vector from a test sample image, and recognizing it with the trained sample image classifier, comprising the steps of:
1. training a sample image classifier based on a machine learning model with the extracted subject feature vectors of the training sample images;
2. extracting the subject feature vector of the test sample image;
3. recognizing the extracted subject feature vector of the test sample image with the trained sample image classifier to obtain the recognition result.
2. The subject-based method for identifying traditional Chinese painting images and calligraphy images according to claim 1, characterized in that said step (3) of randomly selecting training sample images and test sample images from the scanned sample images comprises the steps of: 1. defining the sample image categories, numbered 1 or 0, where 1 denotes a traditional Chinese painting sample image and 0 a calligraphy sample image; 2. letting the set of sample images to be recognized be I, written I = {I1, I2}, where I1 denotes the calligraphy sample images, I1 = {C1, C2, ..., Cn}, with Ci (i = 1, 2, ..., n) being a calligraphy sample image obtained by scanning, and I2 denotes the traditional Chinese painting sample images, I2 = {P1, P2, ..., Pn}, with Pi (i = 1, 2, ..., n) being a traditional Chinese painting sample image obtained by scanning; 3. randomly selecting a set number of sample images from I1 and I2 respectively as the training sample image set T = {I1', I2'}, where I1' denotes the calligraphy training sample images and I2' denotes the traditional Chinese painting training sample images, and taking the remaining sample images of I1 and I2 as the test sample image set {e1, e2, ..., em}, where ei (i = 1, 2, ..., m) is a test sample image.
3. The subject-based method for identifying traditional Chinese painting images and calligraphy images according to claim 1 or 2, characterized in that the algorithm used for the training based on a machine learning model in sub-step 1 of said step (5) is one of a decision tree algorithm, an artificial neural network, a support vector machine algorithm and a Bayesian learning algorithm.
CN 201110131873 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject Expired - Fee Related CN102147867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110131873 CN102147867B (en) 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110131873 CN102147867B (en) 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject

Publications (2)

Publication Number Publication Date
CN102147867A true CN102147867A (en) 2011-08-10
CN102147867B CN102147867B (en) 2012-12-12

Family

ID=44422124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110131873 Expired - Fee Related CN102147867B (en) 2011-05-20 2011-05-20 Method for identifying traditional Chinese painting images and calligraphy images based on subject

Country Status (1)

Country Link
CN (1) CN102147867B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842046A (en) * 2012-08-07 2012-12-26 天津大学 Calligraphic style identification method based on overall feature extracting and training
CN103336943A (en) * 2013-06-04 2013-10-02 广东药学院 A microscopic image identification method for determining added medicaments in animal feed
CN103336942A (en) * 2013-04-28 2013-10-02 中山大学 Traditional Chinese painting identification method based on Radon BEMD (bidimensional empirical mode decomposition) transformation
CN103955718A (en) * 2014-05-15 2014-07-30 厦门美图之家科技有限公司 Image subject recognition method
CN104780465A (en) * 2015-03-25 2015-07-15 小米科技有限责任公司 Frame parameter adjusting method and device
CN106372656A (en) * 2016-08-30 2017-02-01 同观科技(深圳)有限公司 Depth one-time learning model obtaining method and device and image identification method and device
WO2017024963A1 (en) * 2015-08-11 2017-02-16 阿里巴巴集团控股有限公司 Image recognition method, measure learning method and image source recognition method and device
CN110427990A (en) * 2019-07-22 2019-11-08 浙江理工大学 A kind of art pattern classification method based on convolutional neural networks
CN110877019A (en) * 2018-09-05 2020-03-13 西门子(中国)有限公司 Traditional Chinese medicinal material impurity removing device and method


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070195344A1 (en) * 2006-02-01 2007-08-23 Sony Corporation System, apparatus, method, program and recording medium for processing image
CN101196996A (en) * 2007-12-29 2008-06-11 北京中星微电子有限公司 Image detection method and device
CN101334835A (en) * 2008-07-28 2008-12-31 上海高德威智能交通系统有限公司 Color recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Du Yajuan (杜雅娟), "Research on Feature Extraction and Classification Algorithms for Traditional Chinese Paintings" (国画特征提取与分类算法的研究), China Master's Theses Full-text Database (《中国优秀硕士学位论文全文数据库》), 2008, issue 10, full text, relevant to claims 1-3. *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842046B (en) * 2012-08-07 2015-09-23 天津大学 A kind of calligraphic style recognition methods of extracting based on global characteristics and training
CN102842046A (en) * 2012-08-07 2012-12-26 天津大学 Calligraphic style identification method based on overall feature extracting and training
CN103336942A (en) * 2013-04-28 2013-10-02 中山大学 Traditional Chinese painting identification method based on Radon BEMD (bidimensional empirical mode decomposition) transformation
CN103336943A (en) * 2013-06-04 2013-10-02 广东药学院 A microscopic image identification method for determining added medicaments in animal feed
CN103336943B (en) * 2013-06-04 2016-06-08 广东药学院 For judging animal-feed is added the microscopic image identification method of medicine
CN103955718A (en) * 2014-05-15 2014-07-30 厦门美图之家科技有限公司 Image subject recognition method
CN104780465A (en) * 2015-03-25 2015-07-15 小米科技有限责任公司 Frame parameter adjusting method and device
CN104780465B (en) * 2015-03-25 2018-09-04 小米科技有限责任公司 Frame parameter adjusting method and device
WO2017024963A1 (en) * 2015-08-11 2017-02-16 阿里巴巴集团控股有限公司 Image recognition method, measure learning method and image source recognition method and device
CN106372656A (en) * 2016-08-30 2017-02-01 同观科技(深圳)有限公司 Depth one-time learning model obtaining method and device and image identification method and device
CN106372656B (en) * 2016-08-30 2019-05-10 同观科技(深圳)有限公司 Obtain method, image-recognizing method and the device of the disposable learning model of depth
CN110877019A (en) * 2018-09-05 2020-03-13 西门子(中国)有限公司 Traditional Chinese medicinal material impurity removing device and method
CN110427990A (en) * 2019-07-22 2019-11-08 浙江理工大学 A kind of art pattern classification method based on convolutional neural networks

Also Published As

Publication number Publication date
CN102147867B (en) 2012-12-12

Similar Documents

Publication Publication Date Title
CN102147867B (en) Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN105261017B (en) The method that image segmentation based on road surface constraint extracts pedestrian's area-of-interest
CN105046196B (en) Front truck information of vehicles structuring output method based on concatenated convolutional neutral net
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
CN103049763B (en) Context-constraint-based target identification method
Fritsch et al. Monocular road terrain detection by combining visual and spatial information
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN101673338B (en) Fuzzy license plate identification method based on multi-angle projection
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN102629322B (en) Character feature extraction method based on stroke shape of boundary point and application thereof
CN106156684B (en) A kind of two-dimensional code identification method and device
CN105354568A (en) Convolutional neural network based vehicle logo identification method
CN108564120B (en) Feature point extraction method based on deep neural network
CN105335725A (en) Gait identification identity authentication method based on feature fusion
CN104778701A (en) Local image describing method based on RGB-D sensor
CN106909902A (en) A kind of remote sensing target detection method based on the notable model of improved stratification
CN113408584B (en) RGB-D multi-modal feature fusion 3D target detection method
CN101266654A (en) Image text location method and device based on connective component and support vector machine
CN102024156A (en) Method for positioning lip region in color face image
CN103186790A (en) Object detecting system and object detecting method
CN102147812A (en) Three-dimensional point cloud model-based landmark building image classifying method
CN110390228A (en) The recognition methods of traffic sign picture, device and storage medium neural network based
Meng et al. Text detection in natural scenes with salient region
CN105405138A (en) Water surface target tracking method based on saliency detection
CN107992856A (en) High score remote sensing building effects detection method under City scenarios

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121212

Termination date: 20150520

EXPY Termination of patent right or utility model