CN102903099A - Color image edge detection method based on directionlet conversion - Google Patents
- Publication number
- CN102903099A CN102903099A CN2012103255446A CN201210325544A CN102903099A CN 102903099 A CN102903099 A CN 102903099A CN 2012103255446 A CN2012103255446 A CN 2012103255446A CN 201210325544 A CN201210325544 A CN 201210325544A CN 102903099 A CN102903099 A CN 102903099A
- Authority
- CN
- China
- Prior art keywords
- component
- edge
- edges
- image
- directionlet
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a color image edge detection method based on the directionlet transform. It addresses two shortcomings of traditional color edge detection methods: incomplete extraction of color edge information and insensitivity to directional edges. The method comprises the steps of: inputting a red-green-blue (RGB) color image and converting it to hue-saturation-intensity (HSI) space; eliminating the effect of the 2π periodicity of the H component; constructing transformation matrices and obtaining the directionlet transform coefficients of each component image; calculating the gradient modulus and phase angle of every pixel; obtaining the edges of each component image by the modulus-maxima method and eight-neighborhood linking; weighting the edges obtained through the directionlet transforms of the three transformation matrices to obtain the edge of each component; summing the absolute values of the edges of the two quadrature components of the H component to obtain the H-component edge; and fusing the edges of the H, S, and I components to obtain the final edge map. The method offers good directional selectivity, extracts color edges completely and accurately, and is applicable to color image edge detection.
Description
Technical field
The invention belongs to the technical field of image processing and relates to a color image edge detection method based on the directionlet transform. The method can be used for digital image preprocessing in fields such as forest resource survey, medical imaging, and astronomical imaging.
Background technology
An edge is a location where image information such as gray level or structure changes abruptly; it marks the end of one region and the beginning of another. Edges convey much of an image's information and sketch the basic contours of objects, so edge detection is very important to computer vision systems, playing a key role in image segmentation, pattern recognition, and computer vision in general. Traditional edge detection techniques, such as the wavelet modulus maxima algorithm, differential operators, and mathematical morphology, all operate on gray-level images. But a color image carries richer information than a gray-level image, and only about 90% of the edges in a color image coincide with those in the corresponding gray-level image, meaning the remaining edges can only be found using color. Color image edge detection is therefore particularly important.
At present, most color image edge detection methods operate in RGB space: a gray-level edge detector is applied separately to the R, G, and B components, and the three component edges are then combined by some logical rule to yield the color edge map. But the R, G, and B components are strongly correlated; when the illumination changes, for example, all three components change simultaneously. As a result, the detection result differs little from detecting edges after converting to gray level, and part of the color edge information is lost. Another relatively mature family of RGB-space methods is the vector-space approach, whose main idea is to regard each pixel as a three-dimensional vector in RGB space, so that the whole color image becomes a vector field. The main drawbacks of RGB space are its non-uniformity and non-intuitiveness: the distance between two color points does not equal the perceptual difference between the two colors, and perceptual attributes such as hue, saturation, and brightness cannot be estimated directly from RGB values. Moreover, traditional edge detectors such as the wavelet modulus maxima algorithm and mathematical morphology, restricted in their filtering directions, cannot capture directional edge information well in RGB space.
Summary of the invention
In view of the above deficiencies of the prior art, the object of the invention is to propose a color image edge detection method based on the directionlet transform, so as to improve the detection accuracy of color edges and directional edges in color images.
The technical scheme that realizes this object is: transform the RGB color image into HSI space; eliminate the effect of the 2π periodicity of its H component; apply the directionlet transform separately to the two quadrature components of the H component, to the saturation S component, and to the intensity I component, and extract their edges; sum the absolute values of the two quadrature-component edges to obtain the hue H edge; and finally fuse the three HSI component edges to obtain the final edge map. The concrete steps are as follows:
(1) Input an RGB color image and convert it to HSI space, obtaining the hue H, saturation S, and intensity I components:

Saturation: S = 1 - 3·min(R, G, B)/(R + G + B)

Intensity: I = (R + G + B)/3

Hue: H = θ if B ≤ G, and H = 2π - θ otherwise,

where R, G, and B are the red, green, and blue components, and θ is the angle in radians given by θ = arccos{[(R - G) + (R - B)] / [2·√((R - G)² + (R - B)(G - B))]};
(2) Eliminate the effect of the 2π periodicity of the hue H component:

2a) generate two quadrature components M and Q from the H component, thereby mapping H from the 2π-periodic space to a linear space:

M = cos(H), Q = sin(H);

2b) extract the edges of M and Q separately and sum their absolute values to obtain the edge of the H component, eliminating its 2π periodicity, that is:

G(H) = |G(M)| + |G(Q)|,

where G(H), G(M), and G(Q) are the edges of the H, M, and Q components respectively;
(3) Construct the transformation matrices, and apply the directionlet transform separately to the quadrature components M and Q of the hue H component, to the saturation S component, and to the intensity I component, obtaining three groups of directionlet transform coefficients for each component, where d1 is the transform direction, d2 is the alignment (queue) direction, j is the decomposition level, x and y are pixel coordinates, and n indexes the component;
(4) Use the directionlet transform coefficients to compute the gradient modulus Mf_n(j, x, y)_k and phase angle Af_n(j, x, y)_k of every pixel in each component image, where k = 1, 2, 3;
(5) Detect edges in each component image by the modulus-maxima method, and fuse the scales by eight-neighborhood linking to obtain the edge of each component image;

(6) weight the component-image edges obtained through the directionlet transforms of the three transformation matrices to obtain the edge of each component;

(7) sum the absolute values of the edges of the two quadrature components M and Q of the hue H component to obtain the hue H edge;

(8) fuse the edges of the hue H, saturation S, and intensity I components to obtain the final edge map.
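Step (4) can be sketched numerically. The formula images are not reproduced in the text, so the usual wavelet-modulus-maxima definitions are assumed here (the function name is ours, not the patent's): the modulus is the root of the summed squares of the horizontal and vertical detail coefficients, and the phase angle is their arctangent.

```python
import math

def gradient_magnitude_phase(w_h, w_v):
    """Gradient modulus and phase angle at one pixel from a pair of
    directional detail coefficients (horizontal, vertical).

    Assumed forms: Mf = sqrt(wh^2 + wv^2), Af = arctan2(wv, wh).
    """
    mf = math.hypot(w_h, w_v)    # gradient modulus
    af = math.atan2(w_v, w_h)    # gradient phase angle in (-pi, pi]
    return mf, af
```

Step (5) then keeps a pixel only when its modulus is a local maximum along the direction given by the phase angle.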
Compared with the prior art, the present invention has the following advantages:

First, the invention works in the HSI space, which matches the human visual system, and effectively removes the 2π periodicity of the H component. It thereby overcomes the inability of the prior art to exploit chrominance information and its susceptibility to the H component's 2π periodicity, achieving accurate extraction of color edge information.

Second, the invention uses the directionlet transform for edge detection. Because the directionlet transform can be carried out along any two directions with rational slopes, it overcomes the prior art's inability, caused by restricted filtering directions, to extract complete directional edges; by combining several filtering directions it extracts directional edges effectively.
Description of drawings
Fig. 1 is the flow chart of the present invention;

Fig. 2 shows the filtering direction group corresponding to each of the three transformation matrices;

Fig. 3 shows the coset decomposition process corresponding to each of the three transformation matrices;

Fig. 4 compares the experimental results on the house image of the present invention, the existing RGB-space wavelet-modulus-maxima edge detection method, and the existing RGB-space morphological edge detection method;

Fig. 5 gives the same comparison for the lena image.
Embodiment
The steps of the present invention are described in further detail below with reference to Fig. 1.
Step 1: input an RGB color image and convert it to HSI space, obtaining the hue H, saturation S, and intensity I components:

Saturation: S = 1 - 3·min(R, G, B)/(R + G + B)

Intensity: I = (R + G + B)/3

Hue: H = θ if B ≤ G, and H = 2π - θ otherwise,

where R, G, and B are the red, green, and blue components, and θ is the angle in radians given by θ = arccos{[(R - G) + (R - B)] / [2·√((R - G)² + (R - B)(G - B))]}.
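The conversion of Step 1 can be sketched per pixel. The formula images are missing from the text, so the standard textbook RGB-to-HSI conversion is assumed below; this is an illustrative sketch, not the patent's exact implementation.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to (H, S, I).

    H is returned in radians in [0, 2*pi). The classic arccos-based
    conversion is assumed, since the patent's formula images are lost.
    """
    eps = 1e-12                                  # guard against division by zero
    i = (r + g + b) / 3.0                        # intensity: plain average
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)   # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = math.acos(max(-1.0, min(1.0, num / den)))  # angle in radians
    h = theta if b <= g else 2.0 * math.pi - theta     # hue, wrapped at 2*pi
    return h, s, i
```

For pure red (1, 0, 0) this gives H near 0, S = 1, I = 1/3; for pure blue (0, 0, 1) it gives H near 4π/3.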
Step 2: eliminate the effect of the 2π periodicity of the hue H component.

2a) Generate two quadrature components M and Q from the H component, thereby mapping H from the 2π-periodic space to a linear space:

M = cos(H), Q = sin(H);

2b) extract the edges of M and Q separately and sum their absolute values to obtain the edge of the H component, eliminating its 2π periodicity, that is:

G(H) = |G(M)| + |G(Q)|,

where G(H) is the edge of the hue H component, and G(M) and G(Q) are the edges of its two quadrature components.
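A small numeric check illustrates why the mapping to M and Q removes the 2π wrap-around: two hues sitting on opposite sides of the 0/2π seam are numerically far apart in H, which would trigger a spurious hue edge, but nearly identical in (M, Q).

```python
import math

# Two hues that are visually almost identical but sit on opposite
# sides of the 0 / 2*pi wrap-around.
h1, h2 = 0.01, 2 * math.pi - 0.01

# Raw hue difference is huge, so a gradient operator applied directly
# to H would report a spurious edge here.
raw_diff = abs(h1 - h2)

# Mapping H to the quadrature pair (M, Q) = (cos H, sin H) places both
# hues at nearly the same point in a linear space, so no false edge.
m_diff = abs(math.cos(h1) - math.cos(h2))
q_diff = abs(math.sin(h1) - math.sin(h2))
```

Here `raw_diff` is about 6.26 while both `m_diff` and `q_diff` are essentially zero.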
Step 3: construct the transformation matrices and apply the directionlet transform to the quadrature components M and Q of the hue H component, to the saturation S component, and to the intensity I component.

3a) Select the transform direction and the alignment direction and construct the transformation matrix M_Λ:

M_Λ = [d1; d2] = [a1 b1; a2 b2], with a1, b1, a2, b2 ∈ Z,

where the vector d1 = [a1, b1] points along the transform direction with slope b1/a1, the vector d2 = [a2, b2] points along the alignment direction with slope b2/a2, and Z is the set of integers.
3b) Choose three transformation matrices from the family M_Λ. The filtering direction groups corresponding to these three transformation matrices are shown in Fig. 2: the group in Fig. 2(a) corresponds to the first chosen matrix, the group in Fig. 2(b) to the second, and the group in Fig. 2(c) to the third.
3c) Use the transformation matrix M_Λ to partition the image into cosets, obtaining |det(M_Λ)| cosets. The coset decomposition of the image I(x, y) by each of the three transformation matrices is shown in Fig. 3: Fig. 3(a) shows the decomposition for the first matrix, yielding coset P(x, y)_1; Fig. 3(b) shows the decomposition for the second matrix, yielding coset P(x, y)_2; and Fig. 3(c) shows the decomposition for the third matrix, yielding coset P(x, y)_3.
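The coset partition of step 3c) can be sketched as follows. The patent's three concrete matrices appear only as figures in the original, so as an illustration we use the quincunx matrix M = [[1, 1], [1, -1]] with |det(M)| = 2, whose two cosets are the pixels with even and odd x + y; the helper name `coset_partition` is ours.

```python
import numpy as np

def coset_partition(img, coset_index_fn, n_cosets):
    """Split an image into lattice cosets.

    The method partitions the pixel grid into |det(M_Lambda)| cosets of
    the sampling lattice generated by the transformation matrix. The
    coset membership rule is passed in as coset_index_fn(x, y).
    """
    h, w = img.shape
    cosets = [[] for _ in range(n_cosets)]
    for y in range(h):
        for x in range(w):
            cosets[coset_index_fn(x, y)].append(img[y, x])
    return cosets

# Quincunx example: |det| = 2, cosets are even / odd x + y.
img = np.arange(16).reshape(4, 4)
even, odd = coset_partition(img, lambda x, y: (x + y) % 2, 2)
```

Each coset then receives its own separable wavelet transform in step 3d).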
3d) Apply the one-dimensional undecimated wavelet transform to each coset P(x, y)_k along the horizontal and vertical directions, which is equivalent to transforming the original image along the transform direction d1 and the alignment direction d2, and obtain the directionlet transform coefficients, where d1 is the transform direction, d2 is the alignment direction, j is the decomposition level, x and y are pixel coordinates, n indexes the component, and k = 1, 2, 3; |det(M_Λ)| is the absolute value of the determinant of the transformation matrix M_Λ.

Step 4: use the directionlet transform coefficients to compute the gradient modulus Mf_n(j, x, y)_k and phase angle Af_n(j, x, y)_k of every pixel in each component image, where k = 1, 2, 3.
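The one-dimensional undecimated wavelet transform applied to each coset can be sketched with the Haar filter pair. The patent does not name its wavelet filter, so Haar is an assumption made purely for illustration; applying this along the rows and columns of each coset yields the detail coefficients that the later steps threshold.

```python
import numpy as np

def undecimated_haar_1d(signal, levels):
    """One-dimensional undecimated (a trous / stationary) Haar transform.

    Returns one detail band per decomposition level plus the final
    approximation. No decimation: every band keeps the signal length,
    and the filter holes grow as 2^j with the level j.
    """
    a = np.asarray(signal, dtype=float)
    details = []
    for j in range(levels):
        step = 2 ** j                      # spacing between filter taps
        shifted = np.roll(a, -step)        # circular boundary handling
        approx = 0.5 * (a + shifted)       # lowpass output (undecimated)
        details.append(a - approx)         # highpass detail band
        a = approx
    return details, a

# A step edge produces a large detail coefficient exactly at the jump.
sig = np.array([0, 0, 0, 0, 10, 10, 10, 10], dtype=float)
details, approx = undecimated_haar_1d(sig, 1)
```

The level-1 detail band is zero away from the jump and peaks (magnitude 5) at the step location, which is what the modulus-maxima step exploits.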
Step 5: detect edges in each component image by the modulus-maxima method, and fuse the scales by eight-neighborhood linking to obtain the edge of each component image.

5a) Modulus-maxima edge detection proceeds as follows: the gradient phase angle is quantized into eight directions by adjacent position. When -π/8 < phase < π/8, detection is performed along the horizontal direction; when π/8 < phase < 3π/8, along the direction at 45° to the horizontal; when -3π/8 < phase < -π/8, along the direction at -45°; and when phase < -3π/8 or phase > 3π/8, along the vertical direction. In the modulus image Mf_n(j, x, y)_k, the local modulus maxima are found along each pixel's phase direction, yielding a candidate edge map. A threshold T_h is then set, and candidate pixels whose value exceeds the threshold are marked as edge points.
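Step 5a) can be sketched as non-maximum suppression along the quantized phase direction followed by thresholding; the function name and the toy modulus image below are ours.

```python
import numpy as np

def modulus_maxima(mod, phase, t_h):
    """Non-maximum suppression along the quantized gradient direction.

    A pixel is kept as an edge point when its modulus is at least as
    large as both neighbours along the gradient direction (quantized to
    0, 45, 90 or 135 degrees as in step 5a) and exceeds threshold t_h.
    """
    h, w = mod.shape
    edges = np.zeros((h, w), dtype=bool)
    # neighbour offsets (dy, dx) for the four quantized directions
    offs = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = phase[y, x]
            if -np.pi / 8 <= a < np.pi / 8:
                d = 0                          # horizontal detection
            elif np.pi / 8 <= a < 3 * np.pi / 8:
                d = 1                          # +45 degrees
            elif -3 * np.pi / 8 <= a < -np.pi / 8:
                d = 3                          # -45 degrees
            else:
                d = 2                          # vertical
            dy, dx = offs[d]
            if (mod[y, x] >= mod[y + dy, x + dx]
                    and mod[y, x] >= mod[y - dy, x - dx]
                    and mod[y, x] > t_h):
                edges[y, x] = True
    return edges

# Toy example: a horizontal gradient ridge peaking at (2, 2).
mod = np.zeros((5, 5))
mod[2, 1:4] = [2.0, 5.0, 2.0]
phase = np.zeros((5, 5))          # gradient pointing horizontally
edges = modulus_maxima(mod, phase, 1.0)
```

Only the ridge peak at (2, 2) survives the suppression; its weaker horizontal neighbours are discarded.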
5b) For the edge maps at different scales, multiscale fusion is performed by eight-neighborhood linking: taking the largest-scale edge map as the template, a point on the next-largest-scale edge map is merged into the template if it lies within the eight-neighborhood of some largest-scale edge point. The remaining scales are merged into the largest-scale edge map in decreasing order of scale, finally yielding the multiscale-fused edge map.
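Step 5b)'s eight-neighborhood linking can be sketched as follows: a finer-scale edge point survives only if it touches the eight-neighborhood of some largest-scale edge point, so isolated fine-scale responses (usually noise) are discarded. The helper name `fuse_scales` is ours.

```python
import numpy as np

def fuse_scales(coarse, finer):
    """Merge a finer-scale edge map into the largest-scale template.

    A finer-scale point is accepted only if it falls inside the
    8-neighbourhood of some point already present in the coarse
    template, as described in step 5b).
    """
    fused = coarse.copy()
    h, w = coarse.shape
    for y, x in zip(*np.nonzero(finer)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        if coarse[y0:y1, x0:x1].any():     # 8-neighbourhood test
            fused[y, x] = True
    return fused

# Toy example: one coarse edge point; one adjacent and one isolated
# fine-scale point.
coarse = np.zeros((5, 5), dtype=bool); coarse[2, 2] = True
finer = np.zeros((5, 5), dtype=bool); finer[2, 3] = True; finer[0, 0] = True
fused = fuse_scales(coarse, finer)
```

The adjacent fine-scale point (2, 3) is merged; the isolated point (0, 0) is dropped.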
Step 6: weight the component-image edges obtained through the directionlet transforms of the different transformation matrices to obtain the edge of each component.

Step 7: sum the absolute values of the edges of the two quadrature components M and Q of the hue H component to obtain the hue H edge:

G(H) = |G(M)| + |G(Q)|,

where G(H), G(M), and G(Q) are the edges of the H, M, and Q components respectively.

Step 8: fuse the edges of the hue H, saturation S, and intensity I components to obtain the final edge map.

The fusion method is: convert the three component edge maps to binary images, then combine them with a pixel-wise logical OR to obtain the final edge map.
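The fusion of Step 8 can be sketched directly: binarize each component edge map and combine them with a pixel-wise logical OR, so an edge present in any of the H, S, or I components survives in the result.

```python
import numpy as np

def fuse_component_edges(edge_h, edge_s, edge_i):
    """Final fusion of step 8: binarize each component edge map and
    OR them together pixel by pixel."""
    to_bin = lambda e: np.asarray(e) > 0
    return to_bin(edge_h) | to_bin(edge_s) | to_bin(edge_i)

# Toy example: each component contributes a different edge pixel.
h_e = np.array([[0, 1], [0, 0]])
s_e = np.array([[0, 0], [1, 0]])
i_e = np.array([[0, 0], [0, 0]])
final = fuse_component_edges(h_e, s_e, i_e)
```

Both the H-only pixel and the S-only pixel appear in the fused map.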
The effect of the present invention is further illustrated by the following simulations.

1. Simulation conditions

The standard 256 x 256 RGB color images House and Lena, commonly used in image edge detection, were processed with the existing RGB-space wavelet-modulus-maxima edge detection method, the existing RGB-space morphological edge detection method, and the edge detection method of the present invention.

2. Analysis of simulation results
Fig. 4(a) is the test image House; Fig. 4(b) is the edge detection result of the RGB-space wavelet-modulus-maxima method; Fig. 4(c) is the result of the RGB-space mathematical-morphology method; Fig. 4(d) is the result of the method of the present invention.

Figs. 4(a) and 4(b) show that in the edges extracted by the existing RGB-space wavelet-modulus-maxima method, breakpoints appear at the hatched eaves and the edge of the right window is broken.

Fig. 4(c) shows that the edges extracted by the existing RGB-space mathematical-morphology method contain many spurious points between the two windows, the edge of the right window is broken, and some edge regions are too thick.

Fig. 4(d) shows that the edges extracted by the method of the present invention are comparatively complete, with no obvious breakpoints, no spurious points, and moderate edge thickness; the edge detection result is better.
Fig. 5(a) is the test image Lena; Fig. 5(b) is the edge detection result of the RGB-space wavelet-modulus-maxima method; Fig. 5(c) is the result of the RGB-space mathematical-morphology method; Fig. 5(d) is the result of the method of the present invention.

Figs. 5(a) and 5(b) show that the existing RGB-space wavelet-modulus-maxima method does not extract the edge above Lena's head completely, produces many spurious points, and breaks the edges in the shaded region.

Fig. 5(c) shows that the existing RGB-space mathematical-morphology method also fails to extract the edge above Lena's head completely, produces many spurious points, and yields overly thick edges in some regions.

Fig. 5(d) shows that the edges extracted by the method of the present invention are comparatively complete, with no obvious breakpoints or spurious points, moderate edge thickness, and smoother edges; the edge detection result is better.
In summary, the present invention detects locations of strong luminance change well and is little affected by illumination. Because the directionlet transform is used for edge extraction, directional edge information is captured better and the extracted edges are smoother, so the method achieves good edge detection for color images.
Claims (3)
1. A color image edge detection method based on the directionlet transform, comprising the steps of:
(1) inputting an RGB color image and converting it to HSI space to obtain the hue H, saturation S, and intensity I components:

saturation: S = 1 - 3·min(R, G, B)/(R + G + B),

intensity: I = (R + G + B)/3,

hue: H = θ if B ≤ G, and H = 2π - θ otherwise,

where R, G, and B are the red, green, and blue components, and θ is the angle in radians given by θ = arccos{[(R - G) + (R - B)] / [2·√((R - G)² + (R - B)(G - B))]};
(2) eliminating the effect of the 2π periodicity of the hue H component:

2a) generating two quadrature components M and Q from the H component, thereby mapping H from the 2π-periodic space to a linear space:

M = cos(H), Q = sin(H);

2b) extracting the edges of M and Q separately and summing their absolute values to obtain the edge of the H component, eliminating its 2π periodicity, that is:

G(H) = |G(M)| + |G(Q)|,

where G(H), G(M), and G(Q) are the edges of the H, M, and Q components respectively;
(3) constructing the transformation matrices, and applying the directionlet transform separately to the quadrature components M and Q of the hue H component, to the saturation S component, and to the intensity I component, to obtain three groups of directionlet transform coefficients for each component, where d1 is the transform direction, d2 is the alignment direction, j is the decomposition level, x and y are pixel coordinates, and n indexes the component;
(4) using the directionlet transform coefficients to compute the gradient modulus Mf_n(j, x, y)_k and phase angle Af_n(j, x, y)_k of every pixel in each component image, where k = 1, 2, 3;
(5) detecting edges in each component image by the modulus-maxima method, and fusing the scales by eight-neighborhood linking to obtain the edge of each component image;

(6) weighting the component-image edges obtained through the directionlet transforms of the three transformation matrices to obtain the edge of each component;

(7) summing the absolute values of the edges of the two quadrature components M and Q of the hue H component to obtain the hue H edge;

(8) fusing the edges of the hue H, saturation S, and intensity I components to obtain the final edge map.
2. The color image edge detection method based on the directionlet transform according to claim 1, wherein the construction of the transformation matrices in step (3) and the directionlet transforms of the quadrature components M and Q of the hue H component, of the saturation S component, and of the intensity I component are carried out as follows:
3a) selecting the transform direction and the alignment direction and constructing the transformation matrix M_Λ:

M_Λ = [d1; d2] = [a1 b1; a2 b2], with a1, b1, a2, b2 ∈ Z,

where the vector d1 = [a1, b1] points along the transform direction with slope b1/a1, the vector d2 = [a2, b2] points along the alignment direction with slope b2/a2, and Z is the set of integers;
3b) choosing three transformation matrices from the transformation matrix family M_Λ;
3c) partitioning the image I(x, y) into cosets by each of the three chosen transformation matrices, obtaining the cosets P(x, y)_1, P(x, y)_2, and P(x, y)_3 respectively;
3d) applying the one-dimensional undecimated wavelet transform to each coset P(x, y)_k along the horizontal and vertical directions, which is equivalent to transforming the original image along the transform direction d1 and the alignment direction d2, obtaining the directionlet transform coefficients, where j is the decomposition level, x and y are pixel coordinates, n indexes the component, and k = 1, 2, 3; |det(M_Λ)| is the absolute value of the determinant of the transformation matrix M_Λ.
3. The color image edge detection method based on the directionlet transform according to claim 1, wherein the multiscale fusion by eight-neighborhood linking in step (5) takes the largest-scale edge map as the template; a point on the next-largest-scale edge map is merged into the template if it lies within the eight-neighborhood of some largest-scale edge point; the remaining scales are merged into the largest-scale edge map in decreasing order of scale, finally yielding the multiscale-fused edge map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012103255446A CN102903099A (en) | 2012-09-05 | 2012-09-05 | Color image edge detection method based on directionlet conversion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102903099A true CN102903099A (en) | 2013-01-30 |
Family
ID=47575312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012103255446A Pending CN102903099A (en) | 2012-09-05 | 2012-09-05 | Color image edge detection method based on directionlet conversion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102903099A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108711160A (en) * | 2018-05-18 | 2018-10-26 | 西南石油大学 | A kind of Target Segmentation method based on HSI enhancement models |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101719268A (en) * | 2009-12-04 | 2010-06-02 | 西安电子科技大学 | Generalized Gaussian model graph denoising method based on improved Directionlet region |
CN102142133A (en) * | 2011-04-19 | 2011-08-03 | 西安电子科技大学 | Mammary X-ray image enhancement method based on non-subsampled Directionlet transform and compressive sensing |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101719268A (en) * | 2009-12-04 | 2010-06-02 | 西安电子科技大学 | Generalized Gaussian model graph denoising method based on improved Directionlet region |
CN102142133A (en) * | 2011-04-19 | 2011-08-03 | 西安电子科技大学 | Mammary X-ray image enhancement method based on non-subsampled Directionlet transform and compressive sensing |
Non-Patent Citations (2)
Title |
---|
JING BAI,HUAJI ZHOU: "Edge detection approach based on directionlet transform", 《2011 INTERNATIONAL CONFERENCE ON MULTIMEDIA TECHNOLOGY (ICMT)》, 28 July 2011 (2011-07-28), pages 3512 - 3515, XP032042140, DOI: doi:10.1109/ICMT.2011.6001940 * |
GAO LI, LING XIAOMING: "A new edge detection method for noisy color images", 《Automation & Instrumentation》, no. 153, 31 December 2011 (2011-12-31), pages 86 - 88 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108711160A (en) * | 2018-05-18 | 2018-10-26 | 西南石油大学 | A kind of Target Segmentation method based on HSI enhancement models |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106485275B (en) | A method of realizing that cover-plate glass is bonded with liquid crystal display positioning | |
CN105354865B (en) | The automatic cloud detection method of optic of multispectral remote sensing satellite image and system | |
CN106056155B (en) | Superpixel segmentation method based on boundary information fusion | |
CN103927741B (en) | SAR image synthesis method for enhancing target characteristics | |
CN104537625A (en) | Bayer color image interpolation method based on direction flag bits | |
CN103186904B (en) | Picture contour extraction method and device | |
CN104134200B (en) | Mobile scene image splicing method based on improved weighted fusion | |
CN102930534B (en) | Method for automatically positioning acupuncture points on back of human body | |
CN107818303B (en) | Unmanned aerial vehicle oil and gas pipeline image automatic contrast analysis method, system and software memory | |
CN106296638A (en) | Significance information acquisition device and significance information acquisition method | |
CN105913407B (en) | A method of poly focal power image co-registration is optimized based on differential chart | |
CN103927758B (en) | Saliency detection method based on contrast ratio and minimum convex hull of angular point | |
CN103679145A (en) | Automatic gesture recognition method | |
CN109978871B (en) | Fiber bundle screening method integrating probability type and determination type fiber bundle tracking | |
CN104850850A (en) | Binocular stereoscopic vision image feature extraction method combining shape and color | |
CN104217440B (en) | A kind of method extracting built-up areas from remote sensing images | |
CN104240256A (en) | Image salient detecting method based on layering sparse modeling | |
CN108596975A (en) | A kind of Stereo Matching Algorithm for weak texture region | |
CN104835175A (en) | Visual attention mechanism-based method for detecting target in nuclear environment | |
CN106355607B (en) | A kind of width baseline color image template matching method | |
CN106295491B (en) | Lane line detection method and device | |
CN104966285A (en) | Method for detecting saliency regions | |
CN104658003A (en) | Tongue image segmentation method and device | |
CN105741276A (en) | Ship waterline extraction method | |
CN104599288A (en) | Skin color template based feature tracking method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C05 | Deemed withdrawal (patent law before 1993) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20130130 |