CN102117410B - Allogenetic image thick edge detection method based on force field transformation - Google Patents

Allogenetic image thick edge detection method based on force field transformation

Info

Publication number
CN102117410B
CN102117410B CN2011100652020A CN201110065202A
Authority
CN
China
Prior art keywords
image
pixel
size
point
force
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2011100652020A
Other languages
Chinese (zh)
Other versions
CN102117410A (en)
Inventor
曹传东
徐贵力
赵妍
王彪
叶永强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN2011100652020A priority Critical patent/CN102117410B/en
Publication of CN102117410A publication Critical patent/CN102117410A/en
Application granted granted Critical
Publication of CN102117410B publication Critical patent/CN102117410B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a thick edge detection method for heterogeneous (allogenetic) images based on force field transformation. The method comprises the following steps: first, the magnitude and direction of the resultant force acting on each pixel in the image are computed according to the concept of gravitation; second, the resultant-force magnitudes of the pixels are normalized to remove the influence of the differing grey-level distributions of heterogeneous images; the normalized image is then binarized to obtain the region where the edge pixels lie; finally, a decision rule for thick edge points is obtained by studying, through experiments, the magnitude and direction of the resultant force at thick-edge pixels. The method detects the thick edges shared by heterogeneous images, extracts the features those images have in common, and lays a foundation for matching between heterogeneous images.

Description

Thick edge detection method for heterogeneous images based on force field transformation
Technical field
The present invention relates to the field of image processing, and more precisely to the detection of thick edges in heterogeneous images of different resolutions, such as visible-light, infrared and SAR images.
Background technology
At present, many difficulties remain in the study of heterogeneous image matching. In particular, because the imaging mechanisms of heterogeneous images differ greatly (e.g. optical versus SAR images) and their wavebands differ greatly (e.g. visible-light versus long-wave infrared images), it is difficult to obtain features that the images share in attributes such as grey level, brightness and colour. Analysis of the imaging principles of heterogeneous images, and of typical heterogeneous images themselves, shows that the thick edges between objects are a feature the images have in common. These thick edges are key features of targets of interest and often carry most of the information in an image; they give the position and shape of a target and thus provide important feature information for describing or recognizing targets and interpreting images. Commonly used edge detection algorithms include gradient methods (the Roberts, Prewitt, Sobel and Canny operators), template matching methods and transform-domain methods.
Earlier patent 200810102845.6 provides a colour image edge detection method. Based on the rich information contained in a colour image, it first maps the colour image into different colour spaces, then performs edge detection on the image in each colour space, and finally fuses the detection results from all colour spaces to obtain the final edge detection result. However, this method is suitable only for edge detection in colour images.
Earlier patent 201010152357.3 provides an edge detection method based on fractional-order signal processing. It first applies a fractional-order differentiation algorithm to compute the gradient at each pixel of the image, obtaining each pixel's gradient magnitude; it then applies non-maximum suppression to the gradient image; finally, a dual-threshold method decides whether a target pixel is an edge pixel and links the edges to obtain the final detection result. This method extracts the edges of an image well, but the detected edges are too fine for effective thick edges to be extracted from them.
Summary of the invention
The purpose of the present invention is to provide a thick edge detection method for heterogeneous images based on force field transformation, which extracts the thick edges of heterogeneous images and thereby provides important feature information for describing or recognizing targets and interpreting images.
The present invention adopts the following technical scheme:
A thick edge detection method for heterogeneous images based on force field transformation, characterized by comprising the following steps:
(1) according to the concept of gravitation and the decomposition and composition of forces, obtain the magnitude and direction of the resultant force acting on each pixel in the image;
(2) normalize the resultant-force magnitude of each pixel with respect to the magnitudes of all pixels in the image;
(3) binarize the normalized image to obtain the region image containing the edge pixels and their neighbourhoods;
(4) obtain the final thick edge points from the direction and magnitude of the resultant force.
The above thick edge detection method for heterogeneous images based on force field transformation is characterized in that in said step (1), the computation of the magnitude and direction of the resultant force on a pixel comprises the following steps:
(1) compute the gravitational force exerted on the pixel by every other point in its neighbourhood, and decompose each force along the horizontal and vertical axes according to its direction;
(2) sum all the gravitational force components on the horizontal and vertical axes;
(3) compose the resulting horizontal and vertical components into a vector to obtain the magnitude and direction of the resultant force finally acting on the pixel.
The above thick edge detection method for heterogeneous images based on force field transformation is characterized in that the image is regarded as a force field, formed by assuming that a gravitational attraction acts between any two pixels in the image; that is, the pixel at position r_j is subject to the gravitational force F_i(r_j) of the pixel at position r_i. The magnitude of the force is proportional to the grey value of pixel r_i and inversely proportional to the square of the distance between pixels r_i and r_j; its direction is along the line joining the two points. The vector expression is:
F_i(r_j) = I(r_i) \frac{r_i - r_j}{|r_i - r_j|^3}    (1)
where I(r_i) denotes the grey value of pixel r_i, the direction of r_i - r_j indicates the direction vector of F_i(r_j), and |r_i - r_j| is the distance between the two pixels. The resultant force that pixel r_j receives from all other pixels can be expressed as:
F(r_j) = \sum_{i=1, i \ne j}^{N} F_i(r_j) = \sum_{i=1, i \ne j}^{N} I(r_i) \frac{r_i - r_j}{|r_i - r_j|^3}    (2)
where N denotes the number of pixels in the neighbourhood, and the direction of F(r_j) is the resultant direction of the F_i(r_j).
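As a minimal sketch of Eq. (1), the single pairwise force can be written as follows. The function name `pairwise_force` and the use of NumPy are illustrative assumptions, not part of the patent; the cube in the denominator combines the inverse-square magnitude with the normalization of the direction vector r_i - r_j.

```python
import numpy as np

def pairwise_force(I_ri, ri, rj):
    """Eq. (1): force that the pixel at position ri, with grey value I(ri),
    exerts on the pixel at rj. Its magnitude is I(ri)/|ri - rj|^2 and it
    points from rj toward ri (an attraction)."""
    ri = np.asarray(ri, dtype=float)
    rj = np.asarray(rj, dtype=float)
    d = ri - rj                                # direction vector r_i - r_j
    return I_ri * d / np.linalg.norm(d) ** 3   # I * d/|d| * 1/|d|^2
```

For example, a neighbour of grey value 100 two units to the right pulls with magnitude 100/2^2 = 25 along +x.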
The above thick edge detection method for heterogeneous images based on force field transformation is characterized in that in said step (2), the resultant-force magnitude of each pixel is normalized with respect to the magnitudes of all pixels in the image. Because heterogeneous images differ in grey-level distribution or in the imaging principles of their devices, the force field theory implies that the force magnitudes in their force fields also differ. To remove the influence of these differences on the force magnitudes, the resultant-force magnitude of each pixel, once computed, is normalized so that the edge regions of different images stand out. The normalization formula is:
f'(r_j) = \frac{f(r_j) - f_{min}}{f_{max} - f_{min}} \times 255    (3)
where f(r_j) denotes the resultant-force magnitude of pixel r_j, f_max and f_min are respectively the maximum and minimum resultant-force magnitudes over all pixels in the image, and f'(r_j) is the normalized resultant-force magnitude of pixel r_j.
The above thick edge detection method for heterogeneous images based on force field transformation is characterized in that in said step (4), in the discrete representation of the resultant-force direction, a larger number of discrete directions improves detection accuracy, while a smaller number improves noise suppression; to balance positioning accuracy against noise suppression, the force field direction is represented by 8 discrete directions.
The above thick edge detection method for heterogeneous images based on force field transformation is characterized in that in said step (4), the final thick edge points are obtained from the direction and magnitude of the resultant force. When a pixel lies on a thick edge, the grey values in the region indicated by its force field direction are essentially uniform, the grey values in the region indicated by the opposite direction are also essentially uniform, but the grey values of the two regions differ; moreover, when the pixel lies on a thick edge of the image, its resultant-force magnitude is the maximum along the force field direction. Therefore, within the region image containing the edge pixels, the pixel whose force magnitude is maximal along the force field direction is taken as a thick edge of the image, yielding the final thick edge points.
This completes the edge detection process for heterogeneous images based on force field transformation.
The present invention obtains the thick edges of an image from the magnitude and direction characteristics of the resultant force at thick-edge pixels, providing a new idea and a sound method for edge detection.
Description of drawings
Fig. 1 is a flow chart of the edge detection algorithm for heterogeneous images based on force field transformation of the present invention;
Fig. 2 is a schematic diagram of force composition;
Fig. 3 is a force field direction diagram.
Embodiment
The present invention is described in further detail below in conjunction with an embodiment.
With reference to Fig. 1, the thick edge detection method for heterogeneous images based on force field transformation comprises the following steps:
In the first step, based on the concept of gravitation and the decomposition and composition of forces, the magnitude and direction of the resultant force acting on each pixel in the image are obtained.
In the second step, the resultant-force magnitude of each pixel is normalized with respect to the magnitudes of all pixels in the image.
In the third step, the normalized image is binarized to obtain the region image containing the edge pixels and their neighbourhoods.
In the fourth step, the final thick edge points are obtained from the direction and magnitude of the resultant force.
To describe the grey-level distribution of the image, the image is regarded as a force field, formed by assuming that a gravitational attraction acts between any two pixels in the image; that is, the pixel at position r_j is subject to the gravitational force F_i(r_j) of the pixel at position r_i. The magnitude of the force is proportional to the grey value of pixel r_i and inversely proportional to the square of the distance between pixels r_i and r_j; its direction is along the line joining the two points. The vector expression is:
F_i(r_j) = I(r_i) \frac{r_i - r_j}{|r_i - r_j|^3}    (1)
where I(r_i) denotes the grey value of pixel r_i, the direction of r_i - r_j indicates the direction vector of F_i(r_j), and |r_i - r_j| is the distance between the two pixels. The resultant force that pixel r_j receives from all other pixels can be expressed as:
F(r_j) = \sum_{i=1, i \ne j}^{N} F_i(r_j) = \sum_{i=1, i \ne j}^{N} I(r_i) \frac{r_i - r_j}{|r_i - r_j|^3}    (2)
where N denotes the number of pixels in the neighbourhood, and the direction of F(r_j) is the resultant direction of the F_i(r_j).
With reference to Fig. 2, the computation of the resultant force on pixel r_j comprises the following steps:
In the first step, the gravitational force exerted on the pixel by every other point in its neighbourhood is computed, and each force is decomposed along the horizontal and vertical axes according to its direction, as shown in Fig. 2.
In the second step, all the gravitational force components on the horizontal and vertical axes are summed.
In the third step, the resulting horizontal and vertical components are composed into a vector, giving the magnitude and direction of the resultant force finally acting on the pixel.
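The three steps above can be sketched as follows. This is an illustrative implementation under the assumption of a square (2·radius+1)×(2·radius+1) neighbourhood; the function name, the NumPy usage and the wrap-around border handling are assumptions, not taken from the patent.

```python
import numpy as np

def force_field(img, radius=3):
    """Resultant force on every pixel from its square neighbourhood, per Eq. (2).
    Each neighbour at offset (dy, dx) attracts with magnitude I/d^2 along the
    line joining the pixels; the components are accumulated on the horizontal
    and vertical axes and composed into a magnitude at the end."""
    img = np.asarray(img, dtype=np.float64)
    fy = np.zeros_like(img)          # vertical-axis components
    fx = np.zeros_like(img)          # horizontal-axis components
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            d3 = (dy * dy + dx * dx) ** 1.5
            # grey value of the neighbour at offset (dy, dx), for every pixel
            # (np.roll wraps at the borders; a production version would pad)
            neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            fy += neighbour * dy / d3
            fx += neighbour * dx / d3
    return np.hypot(fy, fx), fy, fx  # magnitude, then the two components
```

On a uniform image the symmetric attractions cancel, so the resultant force is zero everywhere; at a step edge the force points toward the brighter side, which is what makes edge regions stand out.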
Because the illumination or the imaging principles of the devices differ, the grey levels of the resulting images differ, and the force field theory implies that the force magnitudes in the corresponding force fields differ as well. To remove the influence of these differences on the force magnitudes, the resultant-force magnitude of each pixel, once computed, is normalized so that the edge regions of different images stand out. The normalization is given by formula (3):
f'(r_j) = \frac{f(r_j) - f_{min}}{f_{max} - f_{min}} \times 255    (3)
where f(r_j) denotes the resultant-force magnitude of pixel r_j, f_max and f_min are respectively the maximum and minimum resultant-force magnitudes over all pixels in the image, and f'(r_j) is the normalized resultant-force magnitude of pixel r_j.
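Formula (3) is a plain min-max stretch to [0, 255]; a minimal sketch (the function name is an assumption, and the code assumes the force image is not perfectly flat, i.e. f_max > f_min):

```python
import numpy as np

def normalize_force(f):
    """Formula (3): linearly stretch the force magnitudes to [0, 255] so that
    force fields of images with different grey-level distributions become
    comparable. Assumes f.max() > f.min()."""
    f = np.asarray(f, dtype=np.float64)
    fmin, fmax = f.min(), f.max()
    return (f - fmin) / (fmax - fmin) * 255.0
```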
To segment out the region containing the thick edges, formula (4) is used to binarize the force-magnitude image (here, T = 80):
f_{bin}(r_j) = \begin{cases} 0, & f'(r_j) < T \\ 255, & f'(r_j) \ge T \end{cases}    (4)
where T is the binarization threshold and f_bin(r_j) is the binarization result. The direction of the resultant force F(r_j) on pixel r_j reflects the distribution of the pixels surrounding r_j. In the discrete representation of the force field direction, a larger number of discrete directions improves detection accuracy, while a smaller number improves noise suppression. With reference to Fig. 3, the present invention fixes the force field direction at 8 discrete directions.
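Formula (4) and the 8-direction quantisation of Fig. 3 can be sketched as follows. The threshold T = 80 is the value stated above; the binning convention — direction 0 along +x, counting counter-clockwise in 45° sectors — is an assumption, since the patent text does not fix the numbering.

```python
import numpy as np

def binarize(f_norm, T=80):
    """Formula (4): pixels whose normalised force magnitude reaches the
    threshold T are kept (255); the rest are suppressed (0)."""
    return np.where(np.asarray(f_norm) >= T, 255, 0).astype(np.uint8)

def quantize_direction(fy, fx):
    """Quantise the resultant-force angle into 8 discrete directions, one per
    45-degree sector. Direction 0 lies along +x; this numbering is an assumed
    convention, not taken from the patent."""
    ang = np.arctan2(fy, fx)                   # angle in (-pi, pi]
    return np.round(ang / (np.pi / 4)).astype(int) % 8
```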
When a pixel lies on a thick edge, the grey values in the region indicated by its force field direction are essentially uniform, and so are the grey values in the region indicated by the opposite direction, but the grey values of the two regions differ. Consequently, when a pixel lies on a thick edge of the image, its resultant-force magnitude is the maximum along the force field direction.
In the force-field-transformation-based thick edge detection method, the thick edges of the image are made to stand out clearly according to the magnitude and direction of the resultant force that pixel r_j receives from the other pixels (the pixels in its N×N neighbourhood). That is, within the region image containing the edge pixels, the pixel whose force magnitude is maximal along the force field direction is taken as a thick edge of the image, yielding the final thick edge points.
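The final selection rule — keep an edge pixel only if its force magnitude is maximal along its quantised force direction — can be sketched as follows. The patent does not specify how far along the direction the comparison extends, so `reach` is a hypothetical parameter, and the direction offsets assume the same counter-clockwise numbering as the quantiser above.

```python
import numpy as np

# Offsets (dy, dx) for the 8 discrete directions; direction 0 is along +x,
# counting counter-clockwise in 45-degree steps (assumed convention).
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def thick_edge_points(mag, direction, mask, reach=2):
    """Within the binarized edge region (mask), keep a pixel only if no pixel
    lying along its force direction (within `reach` steps either way) has a
    larger force magnitude."""
    mag = np.asarray(mag, dtype=float)
    h, w = mag.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            dy, dx = OFFSETS[direction[y, x]]
            is_max = True
            for s in range(1, reach + 1):
                for sy, sx in ((y + s * dy, x + s * dx), (y - s * dy, x - s * dx)):
                    if 0 <= sy < h and 0 <= sx < w and mag[sy, sx] > mag[y, x]:
                        is_max = False
            out[y, x] = is_max
    return out
```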
In summary, the present invention uses the concept of gravitation to describe the grey-level distribution of an image, making full use of the grey-level distribution information of the image pixels. By exploiting the magnitude and direction characteristics of the resultant force at thick-edge pixels, it detects the thick edges of heterogeneous images well, providing a new idea and method for edge detection and thereby laying a foundation for the matching of heterogeneous images.
The above embodiment does not limit the technical scheme of the present invention in any form; any technical scheme obtained by equivalent substitution or equivalent transformation falls within the protection scope of the present invention.

Claims (3)

1. A thick edge detection method for heterogeneous images based on force field transformation, characterized by comprising the following steps:
(1) according to the concept of gravitation and the decomposition and composition of forces, obtain the magnitude and direction of the resultant force acting on each pixel in the image;
(2) normalize the resultant-force magnitude of each pixel with respect to the magnitudes of all pixels in the image;
(3) binarize the normalized image to obtain the region image containing the edge pixels and their neighbourhoods;
(4) obtain the final thick edge points from the direction and magnitude of the resultant force,
wherein in said step (1), the computation of the magnitude and direction of the resultant force on a pixel comprises the following steps:
(11) compute the gravitational force exerted on the pixel by every other point in its neighbourhood, and decompose each force along the horizontal and vertical axes according to its direction;
(12) sum all the gravitational force components on the horizontal and vertical axes;
(13) compose the resulting horizontal and vertical components into a vector to obtain the magnitude and direction of the resultant force finally acting on the pixel,
the image being regarded as a force field, formed by assuming that a gravitational attraction acts between any two pixels in the image; that is, the pixel at position r_j is subject to the gravitational force F_i(r_j) of the pixel at position r_i, the magnitude of the force being proportional to the grey value of pixel r_i and inversely proportional to the square of the distance between pixels r_i and r_j, and its direction being along the line joining the two points, with the vector expression:
F_i(r_j) = I(r_i) \frac{r_i - r_j}{|r_i - r_j|^3}    (1)
where I(r_i) denotes the grey value of pixel r_i, the direction of r_i - r_j indicates the direction vector of F_i(r_j), and |r_i - r_j| is the distance between the two pixels; the resultant force that pixel r_j receives from all pixels can be expressed as:
F(r_j) = \sum_{i=1, i \ne j}^{N} F_i(r_j) = \sum_{i=1, i \ne j}^{N} I(r_i) \frac{r_i - r_j}{|r_i - r_j|^3}    (2)
where N denotes the number of pixels in the neighbourhood, and the direction of F(r_j) is the resultant direction of the F_i(r_j);
and wherein in said step (4), within the region image containing the edge pixels and their neighbourhoods, the pixel whose force magnitude is maximal along the force field direction is taken as a thick edge of the image, obtaining the final thick edge points.
2. The thick edge detection method for heterogeneous images based on force field transformation according to claim 1, characterized in that in said step (2) the normalization formula is:
f'(r_j) = \frac{f(r_j) - f_{min}}{f_{max} - f_{min}} \times 255    (3)
where f(r_j) denotes the resultant-force magnitude of pixel r_j, f_max and f_min are respectively the maximum and minimum resultant-force magnitudes over all pixels in the image, and f'(r_j) is the normalized resultant-force magnitude of pixel r_j.
3. The thick edge detection method for heterogeneous images based on force field transformation according to claim 1, characterized in that in said step (4), in the discrete representation of the resultant-force direction, the force field direction is represented by 8 discrete directions.
CN2011100652020A 2011-03-17 2011-03-17 Allogenetic image thick edge detection method based on force field transformation Expired - Fee Related CN102117410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100652020A CN102117410B (en) 2011-03-17 2011-03-17 Allogenetic image thick edge detection method based on force field transformation

Publications (2)

Publication Number Publication Date
CN102117410A CN102117410A (en) 2011-07-06
CN102117410B true CN102117410B (en) 2012-08-22

Family

ID=44216171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100652020A Expired - Fee Related CN102117410B (en) 2011-03-17 2011-03-17 Allogenetic image thick edge detection method based on force field transformation

Country Status (1)

Country Link
CN (1) CN102117410B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093478B (en) * 2013-02-18 2015-09-30 南京航空航天大学 Based on the allos image thick edges detection method of quick nuclear space fuzzy clustering

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US7672507B2 (en) * 2004-01-30 2010-03-02 Hewlett-Packard Development Company, L.P. Image processing methods and systems
CN101214150A (en) * 2007-12-27 2008-07-09 重庆大学 Method for recognizing human ear characteristic by gravitational field conversion algorithm

Non-Patent Citations (2)

Title
Hurley D. J. et al. A new force field transform for ear and face recognition. Proceedings of the IEEE 2000 International Conference on Image Processing, 2000, vol. 1, pp. 25-28. *
Zhu Haihua et al. Ear image recognition based on image force field transformation. Acta Automatica Sinica, 2006, vol. 32, no. 4, pp. 512-519. *

Also Published As

Publication number Publication date
CN102117410A (en) 2011-07-06

Similar Documents

Publication Publication Date Title
CN108830819B (en) Image fusion method and device for depth image and infrared image
CN102819740B (en) A kind of Single Infrared Image Frame Dim targets detection and localization method
CN102789640A (en) Method for fusing visible light full-color image and infrared remote sensing image
Nemoto et al. Building change detection via a combination of CNNs using only RGB aerial imageries
CN103745216B (en) A kind of radar image clutter suppression method based on Spatial characteristic
CN101980287B (en) Method for detecting image edge by nonsubsampled contourlet transform (NSCT)
CN101739549B (en) Face detection method and system
CN103177458A (en) Frequency-domain-analysis-based method for detecting region-of-interest of visible light remote sensing image
CN104851086A (en) Image detection method for cable rope surface defect
CN106127205A (en) A kind of recognition methods of the digital instrument image being applicable to indoor track machine people
CN104951799A (en) SAR remote-sensing image oil spilling detection and identification method
CN102789578A (en) Infrared remote sensing image change detection method based on multi-source target characteristic support
Hou et al. SAR image ship detection based on visual attention model
CN103093478A (en) Different source image rough edge test method based on rapid nuclear spatial fuzzy clustering
CN104517095A (en) Head division method based on depth image
CN101303728A (en) Method for identifying fingerprint facing image quality
CN103793894A (en) Cloud model cellular automata corner detection-based substation remote viewing image splicing method
CN104123734A (en) Visible light and infrared detection result integration based moving target detection method
CN103824302A (en) SAR (synthetic aperture radar) image change detecting method based on direction wave domain image fusion
CN104182752A (en) Intelligent monitoring method of outdoor advertising board
Jung et al. Rapid and non-invasive surface crack detection for pressed-panel products based on online image processing
JP5578986B2 (en) Weather radar observation information providing system and weather radar observation information providing method
CN102117410B (en) Allogenetic image thick edge detection method based on force field transformation
CN106291550A (en) The polarization SAR Ship Detection of core is returned based on local scattering mechanism difference
CN102800101A (en) Satellite-borne infrared remote sensing image airport ROI rapid detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120822

Termination date: 20140317