CN101344928B - Method and apparatus for confirming image area and classifying image - Google Patents


Info

Publication number
CN101344928B
CN101344928B (grant) · CN101344928A (application publication) · CN 200710136257 (application number)
Authority
CN
China
Prior art keywords
image
value
pixel
color
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200710136257
Other languages
Chinese (zh)
Other versions
CN101344928A (en)
Inventor
王健民
陈新武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CN 200710136257
Publication of CN101344928A
Application granted
Publication of CN101344928B


Landscapes

  • Image Analysis (AREA)

Abstract

The disclosed invention relates to a method for determining image regions within an image. The method comprises the following steps: a confidence value is calculated for each pixel in the image, and the confidence value is used to determine the type of each pixel and the image region formed by pixels of the same type. The method can significantly improve image recall and obtain classification results better than those of existing methods.

Description

Method and apparatus for determining image regions and classifying images
Technical field
The present invention relates generally to methods and apparatus for determining image regions and image categories, and for classifying images into a plurality of categories.
Background technology
Classifying images into categories with semantic meaning is one of the most challenging problems in content-based image retrieval and computer vision. Classifying an image by its features determines which applications are applicable and how further processing will proceed. Image retrieval is closely related to classification: in an image retrieval system, the features extracted from each image are stored in a database together with the image, and images with similar features are later retrieved according to an input feature vector. In this sense, a retrieval system can be regarded as an application of classification techniques. Much work has been done in this field in recent years, and various systems have been proposed in the literature. However, in current algorithms (for example, color-histogram methods), features extracted from the whole image are used directly for image classification, and the drawback of low recall remains.
Therefore, there is a need for a method that can classify images and obtain better classification results than existing methods.
Summary of the invention
The object of the present invention is to provide methods and apparatus for determining image regions and classifying images into a plurality of categories, thereby obtaining better classification results than the prior art.
To achieve the above object, according to one aspect of the present invention, a method for determining an image region of an image is provided, comprising the steps of: calculating, as color features, statistical values of each color component of a pixel in the image; performing a wavelet transform on the image and calculating, as texture features of the neighboring area, statistical values of the gray values in each of a plurality of regions obtained by the wavelet transform; calculating a confidence value for the pixel by applying a decision function to the color and texture features; determining the type of the pixel from the confidence value; and determining the image region formed by pixels of the same type.
According to another aspect of the present invention, a method for classifying an image into a plurality of categories is provided, comprising the steps of: calculating, as color features, statistical values of each color component of a pixel in the image; performing a wavelet transform on the image and calculating, as texture features of the neighboring area, statistical values of the gray values in each of a plurality of regions obtained by the wavelet transform; calculating a confidence value for the pixel by applying a decision function to the color and texture features; determining image features from the confidence values and positional information of the pixels in the image; and classifying the image into the plurality of categories by the image features.
According to still another aspect of the present invention, a method for determining the category of an image is provided, comprising the steps of: calculating, as color features, statistical values of each color component of a pixel in the image; performing a wavelet transform on the image and calculating, as texture features of the neighboring area, statistical values of the gray values in each of a plurality of regions obtained by the wavelet transform; calculating a confidence value for the pixel by applying a decision function to the color and texture features; and determining the category of the image based on the confidence values of the pixels in the image.
According to still another aspect of the present invention, an apparatus using a computer program to determine an image region of an image is provided, comprising a computing module for calculating confidence values of image pixels, and a determination module for determining, from the confidence values, the type of each pixel and the image region formed by pixels of the same type. Calculating the confidence values in the computing module further comprises the steps of: calculating, as color features, statistical values of each color component of a pixel in the image; performing a wavelet transform on the image and calculating, as texture features of the neighboring area, statistical values of the gray values in each of a plurality of regions obtained by the wavelet transform; and calculating the confidence value for the pixel by applying a decision function to the color and texture features.
According to another aspect of the present invention, an apparatus using a computer program to classify an image into a plurality of categories is provided, comprising a computing module for calculating confidence values of image pixels, a determination module for determining image features from the confidence values and positional information of the pixels, and a classification module for classifying the image into a plurality of categories by the image features. Calculating the confidence values in the computing module further comprises the same steps as described above.
According to still another aspect of the present invention, an apparatus using a computer program to determine the category of an image is provided, comprising a computing module for calculating confidence values of image pixels and a determination module for determining the category of the image based on the confidence values of the image pixels. Calculating the confidence values in the computing module further comprises the same steps as described above.
Compared with the prior art, the present invention can significantly improve image recall and obtain better classification results than existing methods.
In addition, the classified images and their regions obtained by the present invention can also be used in image enhancement processing to obtain better visual effects.
Description of drawings
The above and other objects, features, and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments read in conjunction with the accompanying drawings.
Fig. 1 is a flowchart of a method for classifying an image into a plurality of categories according to an embodiment of the present invention;
Fig. 2 illustrates how some pixels are selected in an image according to an embodiment of the present invention;
Fig. 3 illustrates cropping an image block around each selected pixel of Fig. 2 according to an embodiment of the present invention;
Fig. 4 schematically illustrates the wavelet transform used to obtain wavelet features according to an embodiment of the present invention;
Fig. 5 is a flowchart of the Adaboost algorithm used in an embodiment of the present invention;
Fig. 6 is a flowchart of Weaklearn (weak learning) in the Adaboost algorithm of Fig. 5;
Fig. 7 is a schematic block diagram of an apparatus for classifying an image into a plurality of categories according to an embodiment of the present invention;
Fig. 8 is a schematic block diagram of an apparatus for determining an image region of an image according to an embodiment of the present invention;
Fig. 9 is a schematic block diagram of an apparatus for determining the category of an image according to an embodiment of the present invention; and
Figure 10 illustrates an exemplary application according to an embodiment of the present invention.
Embodiment
The present invention will now be described in detail with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a method for classifying an image into a plurality of categories according to an embodiment of the present invention. In step 100, the image to be classified is input. In step 110, the image is reduced to a predetermined size, for example an area of 19200 pixels: if the area of the image is greater than 19200, it is shrunk to 19200. The result of the scaling may be slightly inaccurate, but this is not important. The shrinking algorithm may be a linear transformation, and after the reduction the ratio of image width to height is unchanged. If the area of the image is not greater than 19200, it is used directly by the subsequent processing. In step 120, the confidence values of the pixels are calculated (this may also be performed by the computing modules 400 and 500 of Fig. 7 and Fig. 8, respectively); the details are described below.
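By way of illustration, the area-capped shrinking of step 110 may be sketched as follows. The patent fixes only the target area (19200 pixels) and the preserved width-to-height ratio; the function name and integer rounding policy here are assumptions.

```python
import math

TARGET_AREA = 19200  # maximum pixel area used by the method

def shrink_dimensions(width, height, target_area=TARGET_AREA):
    """Return (width, height) whose area is at most target_area,
    preserving the aspect ratio; small images pass through unchanged."""
    area = width * height
    if area <= target_area:
        return width, height
    scale = math.sqrt(target_area / area)  # same factor on both axes
    return max(1, int(width * scale)), max(1, int(height * scale))

# A 640x480 image (area 307200) is reduced to 160x120 (area 19200).
print(shrink_dimensions(640, 480))
```

The square-root scale factor applies equally to width and height, which is what keeps the aspect ratio constant after the reduction.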
To calculate the confidence values, some pixels of the reduced image are first selected. For each selected pixel, color features and wavelet features are calculated from the neighboring block region of the pixel. These two types of features constitute a single feature vector, from which the confidence value of the pixel is obtained. As shown in Fig. 2, only one pixel (shown in gray) is selected out of every nine adjacent pixels; note, however, that the selection method is not limited to the one described here. As shown in Fig. 3, an image block 8 pixels wide and 8 pixels high is cropped around each selected pixel (indicated in black); again, the cropping method is not limited to the one described here. Selected pixels too close to the image boundary to leave enough space for a full block are ignored. The color features are then calculated; they comprise the means, variances, and covariances of the r, g, and b components of the block pixels, expressed by the following equations:
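A minimal sketch of this selection-and-cropping step follows. The one-pixel-per-3×3-neighborhood selection and the skipping of boundary pixels follow the text; the grid offset and the exact block alignment around the selected pixel are assumptions.

```python
import numpy as np

BLOCK = 8  # block width and height in pixels (Fig. 3)
STEP = 3   # one pixel selected out of every 3x3 neighborhood (Fig. 2)

def select_blocks(image):
    """Yield (x, y, block) for each selected pixel whose 8x8 block fits
    entirely inside the image; pixels near the boundary are skipped."""
    h, w = image.shape[:2]
    half = BLOCK // 2
    for y in range(1, h, STEP):
        for x in range(1, w, STEP):
            if x - half < 0 or y - half < 0 or x + half > w or y + half > h:
                continue  # not enough space for a full block
            yield x, y, image[y - half:y + half, x - half:x + half]

img = np.zeros((16, 16, 3), dtype=np.uint8)
blocks = list(select_blocks(img))
print(len(blocks), blocks[0][2].shape)
```

On a 16×16 test image this yields 9 selected pixels, each with an 8×8×3 block.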
1. Means of the block pixels:

$$f(1) = \frac{\sum_{i=1}^{N} r(i)}{N}, \qquad f(2) = \frac{\sum_{i=1}^{N} g(i)}{N}, \qquad f(3) = \frac{\sum_{i=1}^{N} b(i)}{N}$$

2. Variances of the block pixels:

$$f(4) = \frac{\sum_{i=1}^{N} (r(i)-f(1))^2}{N}, \qquad f(5) = \frac{\sum_{i=1}^{N} (g(i)-f(2))^2}{N}, \qquad f(6) = \frac{\sum_{i=1}^{N} (b(i)-f(3))^2}{N}$$

3. Covariances of the block pixels:

$$f(7) = \frac{\sum_{i=1}^{N} (r(i)-f(1))(g(i)-f(2))}{N}$$
$$f(8) = \frac{\sum_{i=1}^{N} (r(i)-f(1))(b(i)-f(3))}{N}$$
$$f(9) = \frac{\sum_{i=1}^{N} (g(i)-f(2))(b(i)-f(3))}{N}$$
where f(1), …, f(9) are the features, N is the number of pixels in the block (preferably 64 in the present invention), the pixels of the block are indexed by i, and r(i), g(i), and b(i) are the r, g, and b components of pixel i, respectively.
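The nine color features above can be sketched directly in Python (the function name is hypothetical; the formulas are the means, variances, and pairwise covariances just given):

```python
import numpy as np

def color_features(block):
    """9-dim color feature of an 8x8x3 block: per-channel means f(1..3),
    variances f(4..6), and the three channel covariances f(7..9)."""
    px = block.reshape(-1, 3).astype(float)   # N pixels, columns r, g, b
    mean = px.mean(axis=0)                    # f(1), f(2), f(3)
    centered = px - mean
    var = (centered ** 2).mean(axis=0)        # f(4), f(5), f(6)
    cov = [(centered[:, a] * centered[:, b]).mean()
           for a, b in [(0, 1), (0, 2), (1, 2)]]  # f(7), f(8), f(9)
    return np.concatenate([mean, var, cov])

block = np.zeros((8, 8, 3)); block[..., 0] = 10  # constant red block
f = color_features(block)
print(f[:3], np.abs(f[3:]).max())
```

A constant block has mean equal to its color and zero variance and covariance, which is a quick sanity check of the formulas.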
Referring now to Fig. 4, which schematically shows the wavelet transform used to obtain features according to an embodiment of the present invention. The method uses a 2-level wavelet transform (here the Haar wavelet) to obtain features. The process is briefly described as follows. First, the color block image is converted to a gray block image. Second, a 1-level wavelet transform is applied to the gray block image to obtain the level-1 wavelet result. The 1-level wavelet transform is then applied again to the upper (low-pass) area of that result; combining the results of the two wavelet transforms yields the six regions indicated by the numerals in Fig. 4. After that, the wavelet features and their variance values are calculated over regions 1, 2, …, 6. The 12-dimensional wavelet features are extracted from regions 1, 2, …, 6 according to the following equations:
$$M(i) = \frac{\sum_{j=1}^{N} |g(j)|}{N}, \qquad V(i) = \frac{\sum_{j=1}^{N} \left(|g(j)| - M(i)\right)^2}{N}$$
where i = 1, …, 6, g(j) is the gray value of pixel j in region i (j = 1, …, N), N is the number of pixels in region i, M(i) is the mean absolute gray value of region i, and V(i) is its variance.
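A sketch of the 2-level Haar decomposition and the 12-dimensional feature follows. It assumes the standard Haar averaging/differencing filters and that the six regions of Fig. 4 are the three level-1 detail subbands plus the three level-2 detail subbands (the final low-pass band is assumed to be excluded, since Fig. 4 shows six regions rather than seven).

```python
import numpy as np

def haar_level(a):
    """One Haar wavelet level: returns the LL subband and (LH, HL, HH)."""
    a = a.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2   # horizontal low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal high-pass
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, (lh, hl, hh)

def wavelet_features(gray_block):
    """12-dim texture feature: M(i) and V(i) of |coefficients| over the
    three level-1 detail subbands and the three level-2 detail subbands."""
    ll1, details1 = haar_level(gray_block)
    _, details2 = haar_level(ll1)        # second level on the low-pass area
    feats = []
    for region in details1 + details2:   # the 6 regions of Fig. 4
        m = np.abs(region).mean()
        v = ((np.abs(region) - m) ** 2).mean()
        feats.extend([m, v])
    return np.array(feats)

f = wavelet_features(np.full((8, 8), 7.0))  # constant block: no texture
print(f.shape, f.max())
```

A constant gray block has zero detail coefficients at every level, so all twelve features vanish.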
The color features and wavelet features obtained by the above steps constitute the feature vector of the pixel. Since the color features are 9-dimensional and the wavelet features are 12-dimensional, the feature vector of the pixel is 21-dimensional and can be expressed as f(1), …, f(21). The value obtained by applying the discriminant H(x) (explained below) is called the weight value. The confidence value is then obtained by normalizing the weight value to the range [0, 1] according to the following steps:
1) let weight value = weight value / 10;
2) if weight value < 0, let weight value = 0; if weight value > 1, let weight value = 1;
3) the weight value is now in the range [0, 1], and the confidence value is this weight value.
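The three normalization steps above amount to a divide-and-clip, sketched below (the function name is hypothetical):

```python
def confidence_from_weight(weight):
    """Normalize the discriminant output H(x) into a confidence in [0, 1]:
    divide by 10, then clip to the unit interval."""
    w = weight / 10.0
    return min(max(w, 0.0), 1.0)

print(confidence_from_weight(3.5), confidence_from_weight(-2.0),
      confidence_from_weight(12.0))
```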
The discriminant H(x) and its parameters are obtained by Adaboost. Referring now to Fig. 5, which shows a flowchart of the Adaboost algorithm used in an embodiment of the present invention. Although the Adaboost algorithm is well known to those skilled in the art, a brief description will help in understanding the method.
In step 200, m training samples are input, that is, S = {(x₁, y₁), (x₂, y₂), …, (x_m, y_m)}, where the labels y_i ∈ Y = {0, 1} and x_i is an instance drawn from some space X and represented in some manner (typically a d-dimensional vector of attribute values). In the training for the pixel confidence values, x_i takes the feature values f(1), …, f(21) (introduced above in the calculation of the confidence value), and y_i is the class label associated with x_i, which can only be 0, representing a negative sample (for example, a non-blue-sky pixel), or 1, representing a positive sample (for example, a blue-sky pixel). Note, however, that the values of y_i are not limited to those described here; those skilled in the art may also set the values of y_i according to actual needs. In step 210, the iteration variable t is initialized to 1, and in step 220, D_t(i) is initialized according to the following formula:
$$D_1(i) = 1/m, \quad i = 1, 2, \ldots, m \qquad (3)$$
where D₁ is the distribution over the first-round training samples S = {(x₁, y₁), (x₂, y₂), …, (x_m, y_m)}, initialized as the uniform distribution. In each subsequent iteration, D_t is computed from the previous D_{t−1}. The flow then advances to the iterative part, illustrated by the following numbered steps.
1) Call weak learning (described later), providing it with the mislabel distribution D_t, and obtain a hypothesis h_t(X) → [−1, 1]. h_t outputs a real number from −1 to 1; the larger the value, the more likely the sample is positive, and the smaller the value, the more likely the sample is negative (step 230). 2) Calculate the pseudo-loss of h_t:
$$\varepsilon_t = \sum_{i=1}^{m} \frac{1 - (y_i \cdot 2 - 1) \cdot h_t(x_i)}{2}\, D_t(i)$$
ε_t measures the quality of the hypothesis h_t. If ε_t is zero, h_t is perfect; conversely, the larger ε_t is, the worse the result. Note that this error is measured with respect to the distribution D_t(i).
3) Set β_t equal to ε_t / (1 − ε_t), where β_t is merely a temporary variable for convenience and has no substantial meaning.
4) Update the distribution D_t(i):
$$D_{t+1}(i) = \frac{D_t(i)}{Z_t}\, \beta_t^{\frac{1 + (y_i \cdot 2 - 1) \cdot h_t(x_i)}{2}}$$
where Z_t is a normalization constant (chosen so that D_{t+1} is a distribution). The weight of each example is multiplied by a certain factor so that D_{t+1} can be computed from D_t, and the weights are then renormalized by dividing by the normalization constant. In effect, "easy" samples correctly classified by many of the previous weak hypotheses get lower weight, while "hard" samples that are often misclassified get higher weight. Thus, the method focuses the most weight on the samples that are hardest for the weak learner (step 240).
The algorithm continues for T rounds, and finally merges the weak hypotheses h₁, h₂, …, h_T into a single final hypothesis:
$$H(x) = \sum_{i=1}^{T} \log\!\left(\frac{1}{\beta_i}\right) g\!\left(\left(f(k_i) \times s_i - r_i\right)/q_i\right)$$
where the function g is defined by an equation image in the original document that is not reproduced here.
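The boosting loop of steps 1)–4) above can be sketched as follows. The weak learner is passed in as a callback, and the squashing function g is assumed to be folded into the weak hypotheses (its exact definition is not recoverable from this extraction); the toy threshold learner at the bottom is purely illustrative.

```python
import numpy as np

def adaboost(X, y, weak_learn, T=10):
    """Adaboost as described above: labels y in {0, 1}, weak hypotheses
    h_t(x) in [-1, 1], pseudo-loss eps_t, temporary variable beta_t.
    weak_learn(X, y, D) must return a function h: sample -> [-1, 1]."""
    m = len(y)
    D = np.full(m, 1.0 / m)              # uniform initial distribution D_1
    hypotheses, betas = [], []
    s = y * 2 - 1                        # map {0, 1} labels to {-1, +1}
    for _ in range(T):
        h = weak_learn(X, y, D)
        out = np.array([h(x) for x in X])
        eps = np.sum((1 - s * out) / 2 * D)       # pseudo-loss eps_t
        eps = min(max(eps, 1e-9), 1 - 1e-9)       # guard degenerate cases
        beta = eps / (1 - eps)
        D = D * beta ** ((1 + s * out) / 2)       # easy samples downweighted
        D = D / D.sum()                           # renormalize (Z_t)
        hypotheses.append(h)
        betas.append(beta)
    def H(x):                            # final merged hypothesis
        return sum(np.log(1 / b) * h(x) for h, b in zip(hypotheses, betas))
    return H

# Toy data: one feature, positive iff x > 0, with a fixed threshold learner.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
def stump(X, y, D):
    return lambda x: 1.0 if x[0] > 0 else -1.0
H = adaboost(X, y, stump, T=5)
print(H([3.0]) > 0, H([-3.0]) > 0)
```

The sign of H(x) separates positive from negative samples, which is exactly how the weight value is used before normalization into a confidence.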
Fig. 6 is a flowchart of the weak learning in the Adaboost algorithm of Fig. 5, illustrated by the following numbered steps.
1) Input
The weak learning algorithm receives the m training samples S = {(x₁, y₁), (x₂, y₂), …, (x_m, y_m)} and the distribution D = {D(1), D(2), …, D(m)} from round t of the Adaboost algorithm (step 300).
2) Iteration
Iterate over k = 1, 2, 3, …, d (d is the dimension of the pattern space X):
1. Obtain the k-th pattern, P_k = {p_{k,1}, p_{k,2}, …, p_{k,m}}, where p_{k,i} = x_{i,k} and x_i = (x_{i,1}, x_{i,2}, …, x_{i,d}) are the components of the sample vector (step 320).
2. Sort the training samples by P_k, so that p_{k,i} < p_{k,j} for any i < j (step 330).
3. Obtain the signed weights w_k = {w_{k,1}, w_{k,2}, …, w_{k,m}}, w_{k,i} = (y_i · 2 − 1) · D(i), so that w_{k,i} is positive for positive samples and negative for negative samples.
4. Obtain the sum of the negative weights $S_0^k = \sum_{w_{k,i} < 0} (-w_{k,i})$ (a positive number).
5. Obtain the cumulative sums of the signed weights, c_k = {c_{k,1}, c_{k,2}, …, c_{k,m}}:
$$c_{k,i} = \sum_{j=1}^{i} w_{k,j} + S_0^k - 0.5$$
6. Find the indices i1 and i2 such that c_{k,i1} = max{c_{k,1}, c_{k,2}, …, c_{k,m−1}} and c_{k,i2} = min{c_{k,1}, c_{k,2}, …, c_{k,m−1}}.
7. If c_{k,i1} > −c_{k,i2}, let s_k = 1 and r_k = p_{k,i1}/2 + p_{k,i1+1}/2;
otherwise let s_k = −1 and r_k = −p_{k,i2}/2 − p_{k,i2+1}/2.
8. Obtain
$$q_k = \frac{1}{2}\sqrt{\frac{\sum_{i=1}^{m} p_{k,i}^2\, D(i)}{\sum_{i=1}^{m} D(i)} - \left(\frac{\sum_{i=1}^{m} p_{k,i}\, D(i)}{\sum_{i=1}^{m} D(i)}\right)^{2}} + 0.00001$$
9. Obtain the loss function
$$\varepsilon_k = \sum_{i=1}^{m} \frac{1 - (y_i \cdot 2 - 1) \cdot g\!\left((p_{k,i} \cdot s_k - r_k)/q_k\right)}{2}\, D_t(i)$$
where g is defined by an equation image in the original document that is not reproduced here.
3) After the iteration
After the iteration, select the best k, that is, $k_0 = \arg\min_{k = 1, 2, 3, \ldots, d} \varepsilon_k$ (step 370).
4) Output
The final output hypothesis of the weak learning is
$$h(x_i) = g\!\left((x_{i,k_0} \cdot s_{k_0} - r_{k_0})/q_{k_0}\right)$$
where g is defined by an equation image in the original document that is not reproduced here.
Assuming the weak learning hypothesis is obtained at round t, this h is the h_t used in the iterative step.
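For a single feature column, steps 2–9 can be sketched as the threshold stump below. Two caveats: g is assumed, per the unrecoverable figure, to clip its argument to [−1, 1]; and the sign assignment of step 7 as transcribed appears inverted, so this sketch uses the assignment that makes h(x) positive on positive samples.

```python
import numpy as np

def g(z):
    """Assumed squashing function: linear in [-1, 1], saturated outside."""
    return max(-1.0, min(1.0, z))

def weak_learn_1d(p, y, D):
    """Decision stump on one feature column p, following steps 2-9:
    sort samples, accumulate signed weights, pick the best split as the
    midpoint between neighbours, and derive sign s, threshold r, scale q."""
    order = np.argsort(p)
    p, y, D = p[order], y[order], D[order]
    w = (y * 2 - 1) * D                  # signed weights (step 3)
    s0 = -w[w < 0].sum()                 # total negative weight (step 4)
    c = np.cumsum(w) + s0 - 0.5          # cumulative signed weight (step 5)
    i1, i2 = np.argmax(c[:-1]), np.argmin(c[:-1])   # step 6
    if c[i1] > -c[i2]:                   # step 7 (sign convention adjusted)
        s, r = -1.0, -(p[i1] + p[i1 + 1]) / 2
    else:
        s, r = 1.0, (p[i2] + p[i2 + 1]) / 2
    mean = np.sum(p * D) / D.sum()       # step 8: weighted std-dev scale
    q = 0.5 * np.sqrt(np.sum(p * p * D) / D.sum() - mean ** 2) + 1e-5
    return lambda x: g((x * s - r) / q)

p = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])
D = np.full(4, 0.25)
h = weak_learn_1d(p, y, D)
print(h(3.0) > 0, h(-3.0) > 0)
```

The full weak learner would run this over every feature dimension k and keep the one with the smallest loss ε_k, as in step 3) above.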
Returning now to Fig. 1: in step 130, the type of each pixel is determined from its confidence value. Taking the determination of blue-sky and non-blue-sky pixels as an example, if the confidence value of a pixel is greater than 0.00001 (this value is merely exemplary), the pixel is considered a blue-sky pixel; otherwise it is considered a non-blue-sky pixel. Accordingly, in step 160, the blue-sky region is determined based on the previously determined blue-sky pixels. It should be noted that using the present invention to determine blue-sky and non-blue-sky regions is merely exemplary and not restrictive; it can be used to determine any kind of region as desired. In step 140, the features of the image are computed from the confidence values of the selected pixels. The image feature is 6-dimensional, and the calculation process is shown below:
Let $sumw = \sum_{i=1}^{N} w(i)$ and let eps = 0.001.
If (sumw < eps),
let img(1) = −1, img(2) = −1, img(3) = −1, img(4) = −1, img(5) = −1, img(6) = −1. Under this condition, −1 is merely a marker for the features and could be another value; in the next step, the algorithm checks for the marker and branches accordingly.
Otherwise let:
$$img(1) = \frac{sumw}{N}, \qquad img(2) = \frac{\sum_{i=1}^{N} w(i)\, x(i)}{sumw}, \qquad img(3) = \frac{\sum_{i=1}^{N} w(i)\, y(i)}{sumw}$$
$$img(4) = \frac{\sum_{i=1}^{N} w(i)\, x(i)^2}{sumw} - \frac{\left(\sum_{i=1}^{N} w(i)\, x(i)\right)^2}{sumw^2}$$
$$img(5) = \frac{\sum_{i=1}^{N} w(i)\, y(i)^2}{sumw} - \frac{\left(\sum_{i=1}^{N} w(i)\, y(i)\right)^2}{sumw^2}$$
$$img(6) = \left(\frac{\sum_{i=1}^{N} w(i)\, x(i)\, y(i)}{sumw} - \frac{\left(\sum_{i=1}^{N} w(i)\, x(i)\right)\left(\sum_{i=1}^{N} w(i)\, y(i)\right)}{sumw^2}\right) \Big/ \left((img(4)+eps)(img(5)+eps)\right)$$
For a selected pixel i, its confidence value is denoted w(i) and its coordinates are denoted x(i) and y(i), respectively; N is the number of selected pixels. The values img(1), …, img(6) above are the image features. In step 150, the image is classified using the image features, as illustrated by the following process:
1) If (img(1) < 0) or (img(1) = −1 and img(2) = −1 and img(3) = −1 and img(4) = −1 and img(5) = −1 and img(6) = −1), let res = −10; otherwise apply the discriminant H(x) (explained below) to the features img(1), …, img(6) and let res = the resulting value of H(x).
2) If (res > 0), the image is classified as blue sky; otherwise it is classified as non-blue sky. Similarly, it should be noted that using the present invention to classify images into blue-sky and non-blue-sky images is merely exemplary and not restrictive; it can be used to classify images into any categories as desired.
$$H(x) = \sum_{i=1}^{T} \log\!\left(\frac{1}{\beta_i}\right) g\!\left((img(k_i) \cdot s_i - r_i)/q_i\right)$$
where g is defined by an equation image in the original document that is not reproduced here.
The discriminant H(x) and its parameters are obtained by Adaboost. The training process is identical to the one above, with only the following differences:
1. In the training set, the training samples in this part are images rather than pixels; blue-sky images are used as positive samples and non-blue-sky images as negative samples.
2. In the training samples, for the set S, x_i represents the image features img(1), …, img(6) (introduced above), and for y_i, 0 represents a non-blue-sky image sample and 1 represents a blue-sky image sample.
Fig. 7, Fig. 8, and Fig. 9 show schematic block diagrams of apparatus for classifying images and their respective regions according to embodiments of the present invention. In Fig. 7, the computing module 400, determination module 410, and classification module 420 can perform the respective processes of steps 120, 140, and 150 of Fig. 1. In Fig. 8, the computing module 500 and determination module 510 can perform the respective processes of steps 120, 130, and 160 of Fig. 1. In Fig. 9, the computing module 600 computes the confidence value of each pixel in the image, and the determination module 610 determines the category of the image based on the confidence values of the pixels in the image.
Figure 10 illustrates an exemplary application according to an embodiment of the present invention. In step 700, an image is input. In step 710, the image or its regions are classified and determined by the method of Fig. 1 or the apparatus of Figs. 7, 8, and 9 according to the present invention. In step 720, image enhancement processing is applied to the image or its regions based on the classification result, and an improved image is finally obtained. For example, suppose the target category is blue sky and an input color image is to be processed. First, the image is scaled by step 110 of Fig. 1. The confidence values of the image's pixels are computed by step 120 of Fig. 1 (a value of 0.1, for example), and the types of the pixels are determined by step 130 of Fig. 1; some pixels are then classified as blue-sky pixels and the others as non-blue-sky pixels. In step 160 of Fig. 1, the region made up of blue-sky pixels is classified as the blue-sky region, and the region made up of non-blue-sky pixels as the non-blue-sky region. In step 140 of Fig. 1, the image features are obtained; suppose the feature values are v1, …, v6. In step 150 of Fig. 1, the category of the image is determined from the image feature values v1, …, v6; for example, the image is determined to be a blue-sky image. Then, in step 720 of Figure 10, image enhancement processing for blue-sky images is applied to the input image (for example, the enhancement may make the blue sky in the image deeper).
Although specific embodiments of the present invention have been disclosed, those skilled in the art will appreciate that changes can be made to the specific embodiments without departing from the spirit and scope of the present invention. Accordingly, the scope of the present invention is not limited to the specific embodiments, and the appended claims are intended to cover any and all such applications, modifications, and embodiments within the scope of the present invention.

Claims (37)

1. A method for determining an image region of an image, comprising the steps of:
calculating, as color features, statistical values of each color component of a pixel in the image;
performing a wavelet transform on the image and calculating, as texture features of the neighboring area, statistical values of the gray values in each of a plurality of regions obtained by the wavelet transform;
calculating a confidence value for the pixel by applying a decision function to the color and texture features;
determining the type of the pixel from the confidence value; and
determining the image region formed by pixels of the same type.
2. The method according to claim 1, wherein the pixel in the image is each pixel in the image or a pixel selected from the image.
3. The method according to claim 1, wherein the step of calculating color features further comprises calculating the color features of the pixels in the neighboring area.
4. The method according to claim 1, wherein the statistical values of each color component comprise at least one of the mean, standard variance, and covariance of each color component.
5. The method according to claim 1, wherein the statistical values of the gray values comprise at least one of the mean and standard variance of the gray values.
6. The method according to claim 1, wherein the decision function is determined by the Adaboost algorithm.
7. A method for classifying an image into a plurality of categories, comprising the steps of:
calculating, as color features, statistical values of each color component of a pixel in the image;
performing a wavelet transform on the image and calculating, as texture features of the neighboring area, statistical values of the gray values in each of a plurality of regions obtained by the wavelet transform;
calculating a confidence value for the pixel by applying a decision function to the color and texture features;
determining image features from the confidence values and positional information of the pixels in the image; and
classifying the image into the plurality of categories by the image features.
8. The method according to claim 7, wherein the pixel in the image is each pixel in the image or a pixel selected from the image.
9. The method according to claim 7, wherein the step of calculating color features further comprises calculating the color features of the pixels in the neighboring area.
10. The method according to claim 7, wherein the statistical values of each color component comprise at least one of the mean, standard variance, and covariance of each color component.
11. The method according to claim 7, wherein the statistical values of the gray values comprise at least one of the mean and standard variance of the gray values.
12. The method according to claim 7, wherein the decision function is determined by the Adaboost algorithm.
13. The method according to claim 7, wherein the step of classifying the category of the image by the image features further comprises applying a decision function to the image features.
14. A method for determining the category of an image, comprising the steps of:
calculating, as color features, statistical values of each color component of a pixel in the image;
performing a wavelet transform on the image and calculating, as texture features of the neighboring area, statistical values of the gray values in each of a plurality of regions obtained by the wavelet transform;
calculating a confidence value for the pixel by applying a decision function to the color and texture features; and
determining the category of the image based on the confidence values of the pixels in the image.
15. The method according to claim 14, wherein the pixel in the image is each pixel in the image or a pixel selected from the image.
16. The method according to claim 14, wherein the step of calculating the color feature further comprises calculating color features of the pixels in the neighboring area.
17. The method according to claim 14, wherein the statistical values of each color component comprise at least one of a mean value, a standard deviation value and a covariance value of the color component.
18. The method according to claim 14, wherein the statistical values of the gray-scale values comprise at least one of a mean value and a standard deviation value of the gray-scale values.
19. The method according to claim 14, wherein the decision function is determined by the Adaboost algorithm.
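The claims recite a wavelet transform that splits the image into a plurality of regions (sub-bands) whose gray-scale statistics serve as texture features (claims 7, 11, 14, 18). The patent does not name a wavelet basis, so as an assumed, illustrative choice the sketch below uses a one-level 2-D Haar transform and takes the mean and standard deviation of each sub-band:

```python
import math

def haar_level1(gray):
    """One-level 2-D Haar transform of an even-sized gray-scale image,
    returning the LL, LH, HL and HH sub-bands (illustrative sketch)."""
    h, w = len(gray), len(gray[0])
    ll, lh, hl, hh = [], [], [], []
    for y in range(0, h, 2):
        ll_row, lh_row, hl_row, hh_row = [], [], [], []
        for x in range(0, w, 2):
            a, b = gray[y][x], gray[y][x + 1]
            c, d = gray[y + 1][x], gray[y + 1][x + 1]
            ll_row.append((a + b + c + d) / 4)   # approximation (average)
            lh_row.append((a + b - c - d) / 4)   # horizontal detail
            hl_row.append((a - b + c - d) / 4)   # vertical detail
            hh_row.append((a - b - c + d) / 4)   # diagonal detail
        ll.append(ll_row); lh.append(lh_row); hl.append(hl_row); hh.append(hh_row)
    return ll, lh, hl, hh

def band_stats(band):
    """Mean and standard deviation of a sub-band, usable as texture features."""
    vals = [v for row in band for v in row]
    m = sum(vals) / len(vals)
    return m, math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
```

On a perfectly flat image all detail bands are zero, so their statistics correctly signal the absence of texture, while textured regions produce non-zero detail statistics.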
20. An apparatus for determining an image area of an image, comprising:
a module for calculating, as a color feature, statistical values of each color component of a pixel in the image;
a module for performing a wavelet transform on the image and calculating, as a texture feature of a neighboring area, statistical values of the gray-scale values in each of a plurality of regions obtained by the wavelet transform;
a module for calculating a confidence value of the pixel in the image by applying a decision function to the color feature and the texture feature;
a module for determining the type of the pixel from the confidence value; and
a module for determining an image area of pixels of the same type.
21. The apparatus according to claim 20, wherein the pixel in the image is each pixel in the image or a pixel selected from the image.
22. The apparatus according to claim 20, wherein the module for calculating the color feature also calculates color features of the pixels in the neighboring area.
23. The apparatus according to claim 20, wherein the statistical values of each color component comprise at least one of a mean value, a standard deviation value and a covariance value of the color component.
24. The apparatus according to claim 20, wherein the statistical values of the gray-scale values comprise at least one of a mean value and a standard deviation value of the gray-scale values.
25. The apparatus according to claim 20, wherein the decision function is determined by the Adaboost algorithm.
26. An apparatus for classifying an image into a plurality of categories, comprising:
a module for calculating, as a color feature, statistical values of each color component of a pixel in the image;
a module for performing a wavelet transform on the image and calculating, as a texture feature of a neighboring area, statistical values of the gray-scale values in each of a plurality of regions obtained by the wavelet transform;
a module for calculating a confidence value of the pixel in the image by applying a decision function to the color feature and the texture feature;
a module for determining an image feature from the confidence values and position information of the pixels in the image; and
a module for classifying the image into the plurality of categories according to the image feature.
27. The apparatus according to claim 26, wherein the pixel in the image is each pixel in the image or a pixel selected from the image.
28. The apparatus according to claim 26, wherein the module for calculating the color feature also calculates color features of the pixels in the neighboring area.
29. The apparatus according to claim 26, wherein the statistical values of each color component comprise at least one of a mean value, a standard deviation value and a covariance value of the color component.
30. The apparatus according to claim 26, wherein the statistical values of the gray-scale values comprise at least one of a mean value and a standard deviation value of the gray-scale values.
31. The apparatus according to claim 26, wherein the decision function is determined by the Adaboost algorithm.
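Claim 26 (like claim 7) combines per-pixel confidence values with position information into an image feature before classification. The patent does not specify how the two are combined; one plausible, purely illustrative realization is to partition the image into a coarse grid and take the mean confidence per cell, so the feature encodes both the strength and the spatial layout of the detections (the grid partition and averaging are assumptions):

```python
def image_feature(confidences, positions, size, grid=2):
    """Image feature from per-pixel confidence values and (x, y) positions:
    mean confidence in each cell of a grid x grid partition of the image.
    The partition scheme is an illustrative assumption, not the patent's."""
    w, h = size
    sums = [[0.0] * grid for _ in range(grid)]
    counts = [[0] * grid for _ in range(grid)]
    for c, (x, y) in zip(confidences, positions):
        gx = min(x * grid // w, grid - 1)   # which grid column
        gy = min(y * grid // h, grid - 1)   # which grid row
        sums[gy][gx] += c
        counts[gy][gx] += 1
    # Flatten row-major; empty cells contribute 0.0.
    return [s / n if n else 0.0
            for row_s, row_n in zip(sums, counts)
            for s, n in zip(row_s, row_n)]
```

The resulting fixed-length vector can then be fed to the second decision function that claim 13 recites for the final classification step.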
32. An apparatus for determining the category of an image, comprising:
a module for calculating, as a color feature, statistical values of each color component of a pixel in the image;
a module for performing a wavelet transform on the image and calculating, as a texture feature of a neighboring area, statistical values of the gray-scale values in each of a plurality of regions obtained by the wavelet transform;
a module for calculating a confidence value of the pixel in the image by applying a decision function to the color feature and the texture feature; and
a module for determining the category of the image based on the confidence values of the pixels in the image.
33. The apparatus according to claim 32, wherein the pixel in the image is each pixel in the image or a pixel selected from the image.
34. The apparatus according to claim 32, wherein the module for calculating the color feature also calculates color features of the pixels in the neighboring area.
35. The apparatus according to claim 32, wherein the statistical values of each color component comprise at least one of a mean value, a standard deviation value and a covariance value of the color component.
36. The apparatus according to claim 32, wherein the statistical values of the gray-scale values comprise at least one of a mean value and a standard deviation value of the gray-scale values.
37. The apparatus according to claim 32, wherein the decision function is determined by the Adaboost algorithm.
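Throughout the claims, the decision function that turns the color and texture features into a confidence value is determined by the Adaboost algorithm (claims 12, 19, 25, 31, 37). An Adaboost-trained classifier is a weighted vote of weak classifiers, and its raw weighted sum naturally serves as the confidence value whose sign gives the pixel's type. The stump weights, thresholds and the "foreground"/"background" labels below are toy assumptions for illustration only:

```python
def adaboost_confidence(features, stumps):
    """Confidence value as the weighted sum of decision-stump votes,
    the form a trained Adaboost classifier takes (illustrative sketch)."""
    score = 0.0
    for alpha, index, threshold in stumps:
        # Each weak classifier votes +1/-1 on a single feature dimension,
        # weighted by its Adaboost weight alpha.
        score += alpha if features[index] > threshold else -alpha
    return score

def pixel_type(confidence):
    """The sign of the confidence value decides the pixel's type."""
    return "foreground" if confidence > 0 else "background"

# Two toy stumps as (alpha, feature index, threshold) triples.
stumps = [(0.7, 0, 0.5), (0.3, 1, 0.2)]
```

For a feature vector `[0.9, 0.1]`, the first stump votes +0.7 and the second -0.3, giving a confidence of about 0.4 and hence a "foreground" decision; confidences near zero indicate pixels the classifier is unsure about.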
CN 200710136257 2007-07-12 2007-07-12 Method and apparatus for confirming image area and classifying image Expired - Fee Related CN101344928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710136257 CN101344928B (en) 2007-07-12 2007-07-12 Method and apparatus for confirming image area and classifying image

Publications (2)

Publication Number Publication Date
CN101344928A CN101344928A (en) 2009-01-14
CN101344928B true CN101344928B (en) 2013-04-17

Family

ID=40246929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710136257 Expired - Fee Related CN101344928B (en) 2007-07-12 2007-07-12 Method and apparatus for confirming image area and classifying image

Country Status (1)

Country Link
CN (1) CN101344928B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010272004A (en) * 2009-05-22 2010-12-02 Sony Corp Discriminating apparatus, discrimination method, and computer program
JP5798371B2 (en) * 2011-05-09 2015-10-21 富士機械製造株式会社 How to create a fiducial mark model template
CN106056549B (en) * 2016-05-26 2018-12-21 广西师范大学 Hidden image restoration methods based on pixel classifications
CN106951916A (en) * 2016-08-31 2017-07-14 惠州学院 A kind of Potato Quality stage division based on multiresolution algorithm and Adaboost algorithm
CN106886783B (en) * 2017-01-20 2020-11-10 清华大学 Image retrieval method and system based on regional characteristics

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1317673C (en) * 2004-03-18 2007-05-23 致伸科技股份有限公司 System and method for distinguishing words and graphics in an image using neural network

Also Published As

Publication number Publication date
CN101344928A (en) 2009-01-14

Similar Documents

Publication Publication Date Title
CN108830188B (en) Vehicle detection method based on deep learning
DE112016005059B4 (en) Subcategory-aware convolutional neural networks for object detection
Ruta et al. Real-time traffic sign recognition from video by class-specific discriminative features
CN105046196B (en) Front truck information of vehicles structuring output method based on concatenated convolutional neutral net
CN101401101B (en) Methods and systems for identification of DNA patterns through spectral analysis
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
CN110689091B (en) Weak supervision fine-grained object classification method
CN106126585B (en) The unmanned plane image search method combined based on quality grading with perceived hash characteristics
CN104680127A (en) Gesture identification method and gesture identification system
CN107610114A (en) Optical satellite remote sensing image cloud snow mist detection method based on SVMs
CN102968637A (en) Complicated background image and character division method
CN102663401B (en) Image characteristic extracting and describing method
CN103020971A (en) Method for automatically segmenting target objects from images
CN101344928B (en) Method and apparatus for confirming image area and classifying image
CN112016605A (en) Target detection method based on corner alignment and boundary matching of bounding box
CN103106265A (en) Method and system of classifying similar images
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
CN110929746A (en) Electronic file title positioning, extracting and classifying method based on deep neural network
CN111126401A (en) License plate character recognition method based on context information
CN112215079B (en) Global multistage target tracking method
CN113205026A (en) Improved vehicle type recognition method based on fast RCNN deep learning network
Yang et al. Semi-automatic ground truth generation for chart image recognition
CN110633635A (en) ROI-based traffic sign board real-time detection method and system
CN111832569B (en) Wall painting pigment layer falling disease labeling method based on hyperspectral classification and segmentation
CN107368847B (en) Crop leaf disease identification method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130417

Termination date: 20170712