CN103198319B - Corner extraction method for blurred images in a mine shaft environment - Google Patents
- Publication number: CN103198319B (application CN201310124386.2A)
- Authority: CN (China)
- Legal status: Expired - Fee Related
Abstract
The present invention relates to a corner extraction method for blurred images in a mine shaft environment. The method uses an inner template to perform fast region discrimination on the image, identifying flat domains and corner domains so that flat domains need no further examination; uses a maximum entropy algorithm combined with region characteristics to segment the corner domain of the image into a background region and a foreground region, so that subsequent corner extraction can select a corner threshold appropriate to each region; extracts candidate corners with an outer template according to the corner-domain state of each detection point; and performs a pseudo-corner removal operation on the candidate corners, removing pseudo corners on edges and narrow bands as well as burr points that satisfy the corner response function, to obtain the final true corners. Experimental results show that, while guaranteeing real-time performance, the present invention achieves higher extraction accuracy and better robustness than existing corner extraction algorithms when applied to blurred shaft images with uneven illumination.
Description
Technical Field
The invention relates to the technical field of mine safety and digital image processing, and in particular to a corner extraction method for blurred images in a mine shaft environment.
Background
Illumination in a mine shaft environment is uneven. With the development of image processing technology, real-time stitching of mine shaft video scenes and intelligent detection of fault points have attracted attention. Corner extraction from images is a basic and key problem in digital image processing and computer vision, and is the basis for registration and stitching, scene analysis, and fault detection of mine shaft images. Specifically, corner extraction is the process of using certain characteristics of a digital image, such as color, shape and grey level, to extract from it a small number of corners that carry rich information and represent important image features.
In recent years, many researchers have studied feature point extraction algorithms extensively; among them, the classic HARRIS, SUSAN, MIC and SIFT algorithms are widely used for feature point extraction, and many scholars have improved these algorithms to suit different application backgrounds. However, the video images acquired in a mine shaft have low resolution; the artificial light source makes the illumination uneven during imaging, so image quality is poor, the contrast of the background area is low and that of the foreground area is high. In addition, the environment in the shaft is complex, and video acquisition from a moving cage blurs object edges in the imaged frames. The images handled well by existing corner extraction algorithms have high contrast and strong texture, so their feature point extraction performance on blurred, unevenly illuminated shaft video images is poor: either the foreground region yields many true corners while the background region loses a large number of true corners, or the background region yields many true corners while the foreground region produces a large number of false corners, and corner detection in edge regions fails almost entirely.
In addition, from the viewpoint of time efficiency, corner extraction is a relatively time-consuming stage of the whole image processing pipeline. Existing methods either examine every pixel in the image; or, after filtering and smoothing, compute the gradient change of every pixel in different directions and take the point with the maximal gradient change in its neighbourhood as a corner; or scan every pixel with a circular template and determine the corner response from the kernel-value area inside the template. Window filtering and template operations directly affect the time complexity of the algorithm: a small template or window keeps the computation small but raises the probability of extracting wrong corners, while a larger template or window increases the computational load. Because of their long computation time, existing corner extraction algorithms cannot meet the real-time requirement of shaft image processing.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a corner extraction method for blurred images in a mine environment, so as to extract corners from shaft images characterized by uneven illumination and blur effectively, with high precision and in real time.
The technical scheme of the invention is a corner extraction method for blurred images used in a mine environment, which comprises the following steps:
step 1, carrying out region discrimination on an image by using an inner template, and identifying a flat region and an angular point region;
step 2, segmenting the corner region of the image to obtain a background region and a foreground region;
step 3, judging the corner-domain state for all detection points in the corner domain, with the state difference threshold between two points set according to the segmentation result of step 2;
step 4, according to the angular point domain state of the detection points obtained in the step 3, calculating an angular point discrimination function by using an outer template, and extracting candidate angular points;
and 5, performing false removing operation on the candidate corner points obtained in the step 4 to obtain final true corner points.
Moreover, step 1 is realized by predefining an inner template comprising 4 pixels uniformly distributed on a ring, and performing the following local operation with each pixel point of the image in turn as the point to be detected:

the centre of the inner template is placed at the point X to be detected, and the image pixels covered by the inner template are denoted P, P', Q and Q'. According to a preset pixel grey difference threshold T_d, the difference degrees n_d(P), n_d(P'), n_d(Q) and n_d(Q') between the four covered pixels and the point X are calculated, and whether X lies in a flat domain or a corner domain is judged by the following functions:

n_d(Y) = 1 if |f_Y − f_X| > T_d; n_d(Y) = 0 otherwise;

F_Corner(X) = 1 if n_d(P) + n_d(P') + n_d(Q) + n_d(Q') ≥ 3; F_Corner(X) = 0 otherwise;

where Y = P, P', Q, Q', and f_Y and f_X are the pixel grey values of pixel Y and of the point X to be detected.

When F_Corner(X) = 0, X lies in a flat domain; when F_Corner(X) = 1, X lies in a corner domain.
And the implementation manner of the step 2 is that the corner region of the image is segmented by using a maximum entropy algorithm and combining with the region characteristics, the obtained background region is marked as A, and the obtained foreground region is marked as B.
And step 3 is realized as follows: for each point X to be detected in the corner domain, the bright/dark states S_X→P, S_X→P', S_X→Q and S_X→Q' between the four inner-template pixels P, P', Q, Q' and the point X are determined in turn, and whether X lies in a bright area or a dark area of its neighbourhood is judged by the corner-domain state discrimination function

S_X = S_X→P + S_X→P' + S_X→Q + S_X→Q',

where the state comparison function is

S_X→Y = 1 if f_X − f_Y > T_C; S_X→Y = −1 if f_Y − f_X > T_C; S_X→Y = 0 otherwise,

with Y = P, P', Q, Q', f_Y and f_X the pixel grey values of pixel Y and of the point X to be detected, and T_C the state difference threshold between two points.

When S_X is positive, the point X to be detected in the corner domain lies in a bright area of its neighbourhood;

when S_X is negative, the point X to be detected in the corner domain lies in a dark area of its neighbourhood;

when S_X is 0, the inner template is rotated clockwise by 45 degrees and S_X is recalculated from the four newly covered pixels P, P', Q and Q'.
Furthermore, step 4 is realized by predefining a ring-shaped outer template and performing the following operation with each pixel in the corner domain in turn as the point to be detected:

the centre of the outer template is placed at the point X to be detected, and, according to the corner-domain state determined in step 3, the contribution function of any pixel Z covered by the outer template to the point X is calculated as

F_S(Z, X) = 1 if (S_X > 0 and S_X→Z = 1) or (S_X < 0 and S_X→Z = −1); F_S(Z, X) = 0 otherwise,

where S_X→Z is calculated by the state comparison function.

A set of more than two pixels covered by the outer template whose contribution function values are 1 and whose physical positions are contiguous forms a corner contribution domain, and the number of pixels in the corner contribution domain containing the most pixels is denoted N_C.

Candidate corners are extracted by the corner discrimination function

F_C(X) = N_C if 2 < N_C < 15; F_C(X) = 0 otherwise;

when F_C(X) = 0, X is detected as a non-corner; otherwise X is a candidate corner.
And in step 5, a pseudo-corner removal operation is performed on the candidate corners obtained in step 4, including removing pseudo corners near true corners, pseudo corners on narrow bands, and noise points and burr points on edges that satisfy the corner discrimination function.
And the method for removing the pseudo corners near true corners that satisfy the corner discrimination function is realized by taking, in the neighbourhood of each candidate corner, the point with the largest F_C(X) value as the true corner.
Moreover, the method for removing pseudo corners on narrow bands is realized as follows: when the candidate corner covered by the outer template has two corner contribution domains, the candidate corner is marked as detection point X, and through X a line ZXZ' is drawn perpendicular to the vector G_1G_2 joining the gravity centres of the two contribution domains; the pixels M_0, M_1, M_2 and M_3 on this line are distributed on both sides of X at distances of 2 pixels, 1 pixel, 1 pixel and 2 pixels from X respectively. The narrow-band pseudo-corner discrimination function is calculated as

F_bind = 1 if N_bind = F_S(M_0, X) + F_S(M_1, X) + F_S(M_2, X) + F_S(M_3, X) equals 4; F_bind = 0 otherwise,

where F_S(M_0, X), F_S(M_1, X), F_S(M_2, X) and F_S(M_3, X) are calculated by the state comparison function and the contribution function. When F_bind = 1, the point X to be detected is removed as a pseudo corner; when F_bind = 0, the point X to be detected is retained as a true corner.
Moreover, the noise points and the burr points on edges are removed as follows: the candidate corner to be processed is marked as detection point X, and the two end pixels of the largest corner contribution domain CCSN are denoted X_1 and X_2.

The set of pixels on the line segment from the centre coordinate (x, y) of detection point X to the centre coordinate (x_1, y_1) of X_1 is denoted Set_1 and contains n_1 pixels, among which the number of pixels with F_S(X_i, X) = 1 (X_i ∈ Set_1), calculated by the contribution function, is n_S1; the set of pixels on the line segment from (x, y) to the centre coordinate (x_2, y_2) of X_2 is denoted Set_2 and contains n_2 pixels, among which the number of pixels with F_S(X_i, X) = 1 (X_i ∈ Set_2), calculated by formula (8), is n_S2.

The discriminant function is calculated as

F_ns(X) = 1 if n_1 − n_S1 ≤ 2 and n_2 − n_S2 ≤ 2; F_ns(X) = 0 otherwise.

When F_ns(X) = 1, the point to be detected is judged to be a noise point or a burr point on an edge and is removed; when F_ns(X) = 0, the point X to be detected is retained as a true corner.
The invention has the following advantages and positive effects:
(1) the method can rapidly identify the flat domain and the angular point domain of the image by using a smaller internal template, avoids further calculation of pixels in the flat domain, and only processes the angular point domain containing a small number of pixels in subsequent calculation, so that the efficiency of the algorithm is greatly improved;
(2) the method divides the angular point domain of the image, and different regions use different difference threshold values, thereby solving the problem of angular point detection failure caused by uneven illumination of the image;
(3) the invention only uses the circular ring as the outer template instead of the traditional circular template to carry out angular point detection, thereby further improving the execution efficiency of the algorithm;
(4) the method and the device count the number of continuous pixel points in the angular point contribution domain by combining the light and shade states of the angular point domain, thereby solving the problem of angular point detection failure caused by edge blurring;
(5) the invention uses a pseudo-corner removal algorithm to remove the detected burr points and the pseudo corners on narrow bands, giving the algorithm higher detection precision.
Drawings
Fig. 1 is a schematic diagram of an inner template for region identification according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an outer template for corner extraction according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of removing a pseudo corner point on a narrow band according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the drawings and an embodiment.
The technical scheme of the invention can be realized as an automatic process using computer software. The corner extraction method for blurred images used in a mine environment comprises the following steps in sequence:
1. Region discrimination is performed on the image with the inner template, quickly identifying the flat domain and the corner domain.
As shown in Fig. 1, the embodiment predefines an inner template M_I comprising 4 pixels evenly distributed on a ring, and performs a fast local operation on every pixel in the image except the edge pixels. The inner template M_I can be constructed by taking, starting from its centre, one pixel at the same distance in each of the four directions up, down, left and right, yielding the 4 pixels of M_I; this distance may be set by the user as the situation requires. In this embodiment the 4 pixels of the inner template are evenly distributed on a circle with a diameter of 5 pixels. With the defined inner template, region discrimination is performed on every frame of the shaft video: the grey values of the four pixels covered by the template are compared with the grey value of the point to be detected, and from the geometric characteristics of corners and the grey comparison it is identified whether the point to be detected lies in a flat domain or a corner domain, as follows:
The centre of the inner template is placed at the point X to be detected in the image; the image pixels covered by the inner template are P, P', Q and Q'. According to the geometric characteristics of corners, if X is to be detected as a corner, at least three of the four points must be darker than X or at least three must be brighter than X; when fewer than three differ, X is considered to lie in a flat domain. The difference degree is calculated as

n_d(Y) = 1 if |f_Y − f_X| > T_d; n_d(Y) = 0 otherwise,    (1)

where Y = P, P', Q, Q', and f_Y and f_X are the grey values of pixel Y and of the point X to be detected. That is, n_d(P) is the difference degree between point P and point X; the difference degrees of P', Q and Q' with respect to X are defined identically and denoted n_d(P'), n_d(Q) and n_d(Q').

Here T_d is a preset pixel grey difference threshold; n_d = 1 indicates a large difference between the two points, and n_d = 0 indicates that the two points are similar.
Different values of T_d affect the accuracy and the time efficiency of the algorithm. In a specific implementation, the skilled person can preset it according to the following rule of thumb: select a smaller T_d for low-contrast images, and a larger T_d otherwise; too large a T_d may lose weak corners, while too small a T_d affects the time efficiency of the algorithm. Experiments show that, for the blurred, unevenly illuminated images in a shaft, choosing T_d = 6 gives good results.
The embodiment defines the flat-domain discriminant function F_Corner:

F_Corner(X) = 1 if n_d(P) + n_d(P') + n_d(Q) + n_d(Q') ≥ 3; F_Corner(X) = 0 otherwise.    (2)

When F_Corner(X) = 0, the point X to be detected lies in a flat domain and needs no further discrimination; when F_Corner(X) = 1, the point X to be detected lies in a corner domain and must be judged further in the following steps.
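As a concrete illustration, the inner-template test of formulas (1) and (2) can be sketched as follows (a minimal Python sketch; the function name, the image-as-nested-lists representation, and the default radius and threshold arguments are illustrative assumptions, not the patent's implementation):

```python
def flat_or_corner(img, x, y, r=2, t_d=6):
    """Classify pixel (x, y) of a grey image (nested lists) as lying in a
    flat domain (returns 0) or a corner domain (returns 1), using the
    4-pixel inner template P, P', Q, Q' at distance r from the centre."""
    f_x = img[y][x]
    # Template pixels: up, down, left and right of X at distance r
    template = [img[y - r][x], img[y + r][x], img[y][x - r], img[y][x + r]]
    # Difference degree n_d(Y) = 1 when |f_Y - f_X| > T_d  (formula (1))
    n_d = [1 if abs(f_y - f_x) > t_d else 0 for f_y in template]
    # F_Corner(X) = 1 when at least 3 of the 4 template pixels differ from X
    return 1 if sum(n_d) >= 3 else 0
```

With r = 2 the four template pixels lie on a circle of 5-pixel diameter, matching the embodiment, and t_d = 6 follows the value the text recommends for shaft images.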
2. The corner domain of the image is divided into a background region and a foreground region by a maximum entropy algorithm combined with the region characteristics.
The method realizes the selection of the double thresholds by combining the foreground/background maximum entropy algorithm with the region characteristics. The maximum entropy threshold algorithm is prior art; it is insensitive to the initial value and has a good classification effect. The embodiment defines the background region as A and the foreground region as B; the algorithm is as follows:
(1) compute the grey-level histogram and the corresponding grey-level probabilities p_i, i = 0, 1, 2, …, 255;

(2) smooth the grey histogram; GV_L is the minimum grey value in the image and GV_H is the maximum grey value;

(3) define T as the region segmentation threshold and compute the proportions occupied by regions A and B:

P_A = Σ_{i=0..T} p_i,  P_B = Σ_{i=T+1..255} p_i;    (3)

(4) define the evaluation function according to the maximum entropy algorithm:

F_H(T) = − Σ_{i=0..T} (p_i / P_A) ln(p_i / P_A) − Σ_{i=T+1..255} (p_i / P_B) ln(p_i / P_B);    (4)

(5) repeat step (4) with T taking values from GV_L to GV_H in turn; the T giving the maximum F_H completes the initial segmentation of the region.
(6) To ensure region consistency, experiments show the following correction is useful after the initial segmentation: for every pixel in the corner domain, the eight neighbourhood points in the eight directions at radius 7 around the point to be detected (the radius can be preset by the skilled person) are classified as belonging to the background region or the foreground region. If the point to be detected was initially judged to lie in the foreground region and more than 6 of its neighbourhood points belong to the background region, the point is corrected to lie in the background region; similarly, if the point to be detected was judged to lie in the background region and more than 6 neighbourhood points belong to the foreground region, the point is corrected to lie in the foreground region.
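The threshold search of steps (1) to (5) can be sketched as follows (a Kapur-style maximum entropy formulation over a 256-bin histogram; the function name and the use of natural logarithms are assumptions, and histogram smoothing is omitted for brevity):

```python
import math

def max_entropy_threshold(hist):
    """Return the grey threshold T maximising the summed entropies of the
    background (grey <= T) and foreground (grey > T) distributions."""
    total = sum(hist)
    p = [h / total for h in hist]                 # grey-level probabilities p_i
    best_t, best_h = None, float("-inf")
    for t in range(256):
        p_a = sum(p[: t + 1])                     # proportion of region A
        p_b = 1.0 - p_a                           # proportion of region B
        if p_a <= 0.0 or p_b <= 0.0:
            continue                              # skip degenerate splits
        h_a = -sum(q / p_a * math.log(q / p_a) for q in p[: t + 1] if q > 0)
        h_b = -sum(q / p_b * math.log(q / p_b) for q in p[t + 1:] if q > 0)
        if h_a + h_b > best_h:
            best_h, best_t = h_a + h_b, t
    return best_t
```

On a clearly bimodal histogram the maximising T falls between the two modes, which is the behaviour the segmentation step relies on.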
3. Judging the state of the angular point domain aiming at all detection points in the angular point domain;
aiming at detection points in the angular point domain, the invention provides an angular point domain state discrimination algorithm as follows:
If the point X to be detected lies in the corner domain of the image, the comparison between any pixel Y in its neighbourhood and X has three possible states: bright, dark and similar. The state comparison function is defined as:

S_X→Y = 1 if f_X − f_Y > T_C; S_X→Y = −1 if f_Y − f_X > T_C; S_X→Y = 0 otherwise,    (5)

where T_C is the state difference threshold between two points; using different thresholds for the background region and the foreground region of the image distinguishes the two, with the values obtained by formula (6). S_X→Y = −1 indicates that point X is darker than point Y, 0 that the grey values of X and Y are similar, and 1 that X is brighter than Y. Here the state comparisons S_X→P, S_X→P', S_X→Q and S_X→Q' of the pixels P, P', Q and Q' covered by the inner template with the detection point X are calculated.
According to the T value calculated in step 2, the state difference threshold T_C between two points is defined as:    (6)
On this basis, the state discrimination function of the point X to be detected is defined:

S_X = S_X→P + S_X→P' + S_X→Q + S_X→Q'.    (7)

The value of S_X can be positive, negative or 0. When S_X is positive, the point X to be detected in the corner domain lies in a bright area of its neighbourhood; when S_X is negative, the point X to be detected in the corner domain lies in a dark area of its neighbourhood; when S_X is 0, the inner template may be rotated clockwise by 45 degrees, and formula (5) and formula (7) recalculated from the four new pixels P, P', Q and Q' covered by the inner template. The S_X referred to in the following steps is always the value calculated by formula (7) in step 3.
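The state comparison of formula (5) and the corner-domain state of formula (7) can be sketched together as follows (function names are illustrative assumptions; the region-dependent threshold T_C is simply passed in as an argument):

```python
def state_compare(f_x, f_y, t_c):
    """S_X->Y of formula (5): 1 if X is brighter than Y by more than T_C,
    -1 if X is darker than Y by more than T_C, 0 if the greys are similar."""
    if f_x - f_y > t_c:
        return 1
    if f_y - f_x > t_c:
        return -1
    return 0

def corner_domain_state(f_x, template_greys, t_c):
    """S_X of formula (7): the sum of S_X->Y over the four inner-template
    pixels; positive means X sits in a locally bright area, negative dark."""
    return sum(state_compare(f_x, f_y, t_c) for f_y in template_greys)
```

When corner_domain_state returns 0, the caller rotates the inner template by 45 degrees and calls it again with the newly covered grey values, as the text prescribes.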
4. According to the angular point domain state of the detection point, an angular point discrimination function is calculated by using an outer template, and candidate angular points are extracted;
As shown in Fig. 2, a ring-shaped outer template M_O with diameter R is predefined, and each pixel in the corner domain is operated on in turn as the point to be detected with this template; R can take the values 5, 7, 9, 11, 13 and so on. Experiments show that R = 13 pixels, giving a ring-shaped outer template comprising 32 pixel points, achieves good results.
According to the corner-domain state discrimination function calculated by formula (7), a corner may lie in a dark area or a bright area, and the grey value of each pixel covered by the ring-shaped outer template has three possible states relative to the point to be detected: bright, similar and dark.
The contribution function of any pixel Z covered by the outer template M_O to the point X to be detected is defined as:

F_S(Z, X) = 1 if (S_X > 0 and S_X→Z = 1) or (S_X < 0 and S_X→Z = −1); F_S(Z, X) = 0 otherwise.    (8)

Here S_X→Z is calculated by formula (5) (it suffices to substitute pixel Z for pixel Y in formula (5)), and S_X is the corner-domain state of the detection point X obtained by formula (7). When F_S(Z, X) = 1, the pixel Z contributes to the detection point X.
A set of more than two physically contiguous pixels under the coverage of the outer template whose contribution function values F_S are 1 constitutes a corner contribution domain, denoted CCSN.

Several corner contribution domains may exist among the pixels covered by the outer template; for example, an "X"- or "Y"-shaped corner gives rise to several corner contribution domains. It is stipulated that each corner contribution domain contains at least 2 pixels. Let the k-th corner contribution domain be CCSN_k = {X_i, X_{i+1}, …, X_j}, where X_i, X_{i+1}, …, X_j are the pixels in CCSN_k. For a noise-free image, the contribution function of every pixel in a corner contribution domain satisfies F_S(X_m, X) = 1 (m = i, i+1, …, j); to make the algorithm more robust to noise, it is recommended for shaft images to allow at most two pixels with F_S(X_m, X) = 0.
The number N_C of pixels in the corner contribution domain containing the most pixels is:

N_C = max_k |CCSN_k|.    (9)

According to the shape property of corners, the corner discrimination function of the point X to be detected is defined as:

F_C(X) = N_C if 2 < N_C < 15; F_C(X) = 0 otherwise.    (10)

When N_C ≤ 2, X is judged to lie on a straight line or to be a noise point; when N_C ≥ 15, X is judged to lie on the flat side of an object edge or corner. In both cases F_C(X) = 0 and X is detected as a non-corner; otherwise X is a candidate corner, and the value of F_C(X) determines the sharpness of the corner.
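The contribution function (8), the largest-domain size N_C of formula (9) and the discrimination of formula (10) can be sketched as follows (self-contained; the wrap-around run counting over the circular ring and the function names are illustrative assumptions):

```python
def contribution(s_x, s_xz):
    """F_S(Z, X) of formula (8): Z contributes to X when Z's bright/dark
    relation to X (s_xz, from formula (5)) matches X's state S_X (s_x)."""
    return 1 if (s_x > 0 and s_xz == 1) or (s_x < 0 and s_xz == -1) else 0

def largest_contribution_domain(flags):
    """N_C of formula (9): length of the longest physically contiguous run
    of contributing pixels on the ring (a run may wrap around the ends)."""
    if all(flags):
        return len(flags)
    best = cur = 0
    for f in flags + flags:            # unroll the circular template once
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def corner_discriminant(flags, n_min=2, n_max=15):
    """F_C(X) of formula (10): 0 for N_C <= 2 (line or noise) or
    N_C >= 15 (flat side of an edge); otherwise N_C, the sharpness."""
    n_c = largest_contribution_domain(flags)
    return 0 if n_c <= n_min or n_c >= n_max else n_c
```

The flags list holds the 32 values of F_S(Z, X) around the R = 13 ring, in template order, so physical contiguity is simply adjacency in the list with wrap-around.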
5. A pseudo-corner removal operation is performed on the candidate corners, removing in turn the pseudo corners near true corners that satisfy the corner discrimination function, the pseudo corners on narrow bands, and the noise points and burr points on edges, to obtain the final true corners.
Among the candidate corners extracted by the corner discrimination function, a certain number of pseudo corners satisfying the discrimination function exist. The embodiment removes the detected wrong corner points by adopting the following operations in turn, so that the algorithm has higher robustness.
(1) Removal of false corner points near true corner points
F_C(X) obtained by formula (10) indicates the sharpness of a corner: the larger the F_C(X) value, the sharper the corner. In the neighbourhood of each candidate corner, the one point with the largest F_C(X) value among those satisfying the corner discrimination function is taken as the true corner. This non-maximum suppression on the F_C(X) values in the neighbourhood effectively removes the pseudo corners. In a specific implementation, the size of the neighbourhood can be set by the skilled person.
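The non-maximum suppression over F_C(X) values can be sketched as follows (the tuple layout (x, y, F_C) and the Euclidean neighbourhood radius are assumptions for illustration):

```python
def non_max_suppress(candidates, radius=3):
    """Among candidate corners (x, y, f_c) closer together than `radius`,
    keep only the one with the largest F_C value (the sharpest corner)."""
    kept = []
    for x, y, f_c in sorted(candidates, key=lambda c: -c[2]):
        if all((x - kx) ** 2 + (y - ky) ** 2 >= radius ** 2
               for kx, ky, _ in kept):
            kept.append((x, y, f_c))
    return kept
```

Processing candidates in descending F_C order guarantees that whenever two candidates fall in the same neighbourhood, the sharper one is kept and the other suppressed.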
(2) Removal of spurious corner points on narrow bands
Many narrow bands exist simultaneously in a shaft image; points on a narrow band also satisfy the corner discrimination function and are identified as corners. This type of pseudo corner can be removed effectively by the narrow-band pseudo-corner removal algorithm.
Both the centre of an "X"-shaped region and a point to be detected on a narrow band have two corner contribution domains, so whether such a point is a true corner needs further testing. When the number of corner contribution domains is 2, it must be judged whether the point to be detected is a true corner or a pseudo corner on a narrow band; when the number is not 2, the narrow-band pseudo-corner removal algorithm need not be executed.
When two corner contribution domains CCSN_1 and CCSN_2 exist in the candidate-corner neighbourhood covered by the outer template, let the gravity centre of CCSN_1 be G_1 and that of CCSN_2 be G_2. As shown in Fig. 3, the candidate corner is marked as detection point X, and through X a line ZXZ' is drawn perpendicular to the vector G_1G_2; the pixels M_0, M_1, M_2 and M_3 on this line are distributed on both sides of X at distances of 2 pixels, 1 pixel, 1 pixel and 2 pixels from X respectively. Define:

N_bind = F_S(M_0, X) + F_S(M_1, X) + F_S(M_2, X) + F_S(M_3, X);  F_bind = 1 if N_bind = 4, F_bind = 0 otherwise,    (11)

where N_bind is the number of pixels on the line ZXZ' contributing to X, and F_bind is the narrow-band pseudo-corner discrimination function. Based on the state S_X of the detection point X calculated in step 3, the values of F_S(M_0, X), F_S(M_1, X), F_S(M_2, X) and F_S(M_3, X) are computed from formula (5) and formula (8), i.e. the pixels M_0, M_1, M_2 and M_3 are substituted for pixel Y in formula (5) and for pixel Z in formula (8). When F_bind = 1, the point X to be detected lies on a narrow band and is removed as a pseudo corner; when F_bind = 0, the point X to be detected is retained.
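The probe-point construction and the narrow-band test can be sketched as follows (the all-four-contribute cutoff for F_bind and the rounding of probe coordinates to integer pixels are assumptions, since the original equation image is not reproduced in this text):

```python
import math

def narrowband_probes(x, y, g1, g2, offsets=(-2, -1, 1, 2)):
    """Pixels M0..M3 on the line ZXZ' through X, perpendicular to the
    vector G1->G2 joining the gravity centres of the two contribution
    domains, at distances 2, 1, 1 and 2 pixels on either side of X."""
    dx, dy = g2[0] - g1[0], g2[1] - g1[1]
    norm = math.hypot(dx, dy)
    px, py = -dy / norm, dx / norm       # unit vector perpendicular to G1G2
    return [(round(x + o * px), round(y + o * py)) for o in offsets]

def is_narrowband_pseudo(probe_contributions):
    """F_bind: X is taken as a pseudo corner on a narrow band when all four
    probe pixels contribute to X (N_bind = 4); otherwise X is retained."""
    return sum(probe_contributions) == 4
```

On a narrow band the perpendicular probes stay inside the same bright (or dark) stripe as X, so all four contribute; at a true corner at least one probe leaves the contributing region.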
(3) Noise point and burr point removal
Noise points and burr points in a shaft image can also satisfy the corner discrimination function and be identified as corners; such wrong corners can be removed effectively by the burr-point removal algorithm.
If the point X to be detected is a noise point or a burr point on an edge, it is identified as a corner because it satisfies the corner discrimination function. Let the two end pixels of the largest corner contribution domain CCSN be X_1 and X_2. The set of pixels on the line segment from the centre coordinate (x, y) of detection point X to the centre coordinate (x_1, y_1) of X_1 is denoted Set_1 and contains n_1 pixels, among which the number of pixels with F_S(X_i, X) = 1 (X_i ∈ Set_1), calculated by formula (8), is n_S1. Similarly, the set of pixels on the line segment from (x, y) to the centre coordinate (x_2, y_2) of X_2 is denoted Set_2 and contains n_2 pixels, among which the number of pixels with F_S(X_i, X) = 1 (X_i ∈ Set_2), calculated by formula (8), is n_S2. When n_1 equals n_S1 and n_2 equals n_S2, the detection point is judged to be a noise point or an edge burr point; to make the algorithm more resistant to noise, at most two pixels with F_S = 0 are allowed on each segment. The discriminant function is defined as:

F_ns(X) = 1 if n_1 − n_S1 ≤ 2 and n_2 − n_S2 ≤ 2; F_ns(X) = 0 otherwise.    (12)
when F_ns(X) takes the value 1, X is judged to be a noise point or a burr point on an edge and is removed; when F_ns(X) takes the value 0, X is retained.
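The segment-based test above can be sketched as follows. This is a hypothetical Python rendering: the helper names, the simple line rasterisation, and the form of the state comparison F_S are assumptions; the tolerance of at most two F_S = 0 pixels per segment follows the text.

```python
import numpy as np

def state_compare(f_y, f_x, t):
    # Assumed stand-in for formula (8): 1 when the pixel shares X's
    # bright/dark state relative to the segmentation threshold t.
    return 1 if (f_y > t) == (f_x > t) else 0

def segment_pixels(p0, p1):
    # Integer pixels sampled from p0 to p1 inclusive, excluding p0
    # itself; a simple stand-in for Bresenham rasterisation.
    steps = max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))
    pts = []
    for i in range(1, steps + 1):
        r = round(p0[0] + (p1[0] - p0[0]) * i / steps)
        c = round(p0[1] + (p1[1] - p0[1]) * i / steps)
        if (r, c) not in pts:
            pts.append((r, c))
    return pts

def is_noise_or_burr(img, x, x1, x2, t, tol=2):
    # F_ns sketch: X survives only if, on each segment from X to an
    # endpoint of its largest corner contribution domain, all but at
    # most `tol` pixels share X's bright/dark state.
    fx = img[x]
    for end in (x1, x2):
        pts = segment_pixels(x, end)
        n_same = sum(state_compare(img[p], fx, t) for p in pts)
        if len(pts) - n_same > tol:
            return 1   # noise point or edge burr: remove
    return 0           # retained
```

For a corner of a solid bright block both segments run along bright arms, so the point is kept; an isolated bright speck fails on both segments and is removed.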
Experimental results show that with this technical scheme, corner points can be effectively extracted from blurred mine shaft video images with uneven illumination, and, while real-time performance is guaranteed, the algorithm achieves higher extraction accuracy and better robustness than existing corner extraction algorithms.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or adopt alternatives, without departing from the spirit or scope of the invention as defined in the appended claims.
Claims (6)
1. A blurred-image corner extraction method for a mine environment, characterized by comprising the following steps:
step 1, performing region discrimination on the image using an inner template to identify flat regions and corner regions;
step 1 is implemented by predefining an inner template comprising 4 pixels uniformly distributed on a ring; taking each pixel point in the image in turn as the point to be detected, the following local operations are performed based on the inner template:
the center point of the inner template is placed at the point X to be detected, and the pixel points of the image covered by the inner template are denoted P, P', Q and Q'; according to a preset pixel gray difference threshold T_d, the difference degrees n_d(P), n_d(P'), n_d(Q) and n_d(Q') between the four covered pixel points P, P', Q, Q' and the point X to be detected are calculated, and whether the point X to be detected lies in a flat domain or a corner domain is judged according to the following function,
where Y = P, P', Q, Q', and f_Y and f_X are the pixel gray values of pixel point Y and the point X to be detected;
when F_Corner(X) = 1, X lies in the flat domain; when F_Corner(X) = 0, X lies in the corner domain;
step 2, segmenting the corner region of the image to obtain a background region and a foreground region;
step 2 is implemented by segmenting the corner regions of the image using a maximum entropy algorithm combined with regional characteristics; the resulting background region is denoted A and the foreground region is denoted B;
step 3, judging the corner-domain state of each detection point in the corner domain, the state difference threshold between two points being set according to the segmentation result of step 2;
step 3 is implemented as follows: for a point X to be detected in the corner domain, the bright/dark states S_X→P, S_X→P', S_X→Q and S_X→Q' of the four pixel points P, P', Q and Q' covered by the inner template, relative to the point X to be detected, are determined in turn, and whether the point X to be detected lies in a bright area or a dark area of its neighborhood is judged according to the following corner-domain state discrimination function,
S_X = S_X→P + S_X→P' + S_X→Q + S_X→Q'
where, in the state comparison function, Y = P, P', Q, Q', f_Y and f_X are the pixel gray values of pixel point Y and the point X to be detected, and the state difference threshold T between two points is the region segmentation threshold;
when S_X is positive, the point X to be detected in the corner domain lies in a bright area of its neighborhood;
when S_X is negative, the point X to be detected in the corner domain lies in a dark area of its neighborhood;
when S_X is 0, the inner template is rotated 45 degrees clockwise and S_X is recalculated from the four newly covered pixel points P, P', Q and Q';
step 4, according to the corner-domain states of the detection points obtained in step 3, calculating a corner discrimination function using an outer template and extracting candidate corner points;
and step 5, performing pseudo-corner removal on the candidate corner points obtained in step 4 to obtain the final true corner points.
2. The blurred-image corner extraction method for a mine environment according to claim 1, wherein step 4 is implemented by predefining a circular outer template and, taking each pixel in the corner domain in turn as the point to be detected, performing the following operations:
the center point of the outer template is placed at the point X to be detected, and the contribution function of any pixel point Z covered by the outer template to the point X to be detected is calculated according to the corner-domain state determined in step 3,
where S_X→Z is calculated according to the state comparison function;
two or more pixels with contribution function value 1 and physically continuous positions under the coverage of the outer template form a corner contribution domain, and the number of pixel points in the corner contribution domain containing the most pixels is denoted N_C;
Candidate corner points are extracted according to the following corner point discrimination function,
where, if F_C(X) = 0, X is judged to be a non-corner point; otherwise X is a candidate corner point.
3. The blurred-image corner extraction method for a mine environment according to claim 2, wherein the pseudo-corner removal performed in step 5 on the candidate corner points obtained in step 4 comprises removing pseudo corner points near true corner points, pseudo corner points on narrow bands, and noise points and edge burr points, all of which satisfy the corner discrimination function.
4. The blurred-image corner extraction method for a mine environment according to claim 3, wherein removing the pseudo corner points near true corner points that satisfy the corner discrimination function is implemented by taking, in the neighborhood of each candidate corner point, the point with the largest F_C(X) value as the true corner point.
5. The blurred-image corner extraction method for a mine environment according to claim 3, wherein removing the pseudo corner points on narrow bands is implemented as follows: when a candidate corner point covered by the outer template has two corner contribution domains CCSN_1 and CCSN_2, the center of gravity of CCSN_1 is denoted G_1 and the center of gravity of CCSN_2 is denoted G_2; the candidate corner point is marked as the detection point X, and a line ZXZ' is drawn through X perpendicular to the vector from G_1 to G_2; the pixel points M_0, M_1, M_2 and M_3 on this line are distributed on the two sides of X, spaced from X by 2 pixels, 1 pixel, 1 pixel and 2 pixels respectively; the narrow-band pseudo-corner discrimination function is calculated as follows,
N_bind = F_S(M_0, X) + F_S(M_1, X) + F_S(M_2, X) + F_S(M_3, X)
where F_S(M_0, X), F_S(M_1, X), F_S(M_2, X) and F_S(M_3, X) are calculated according to the state comparison function and the contribution function; when F_bind takes the value 1, the point X to be detected is removed as a pseudo corner point; when F_bind takes the value 0, the point X to be detected is retained as a true corner point.
6. The blurred-image corner extraction method for a mine environment according to claim 3, wherein the noise points and edge burr points are removed as follows: the candidate corner point to be processed is marked as the detection point X, and the two terminal pixel points of the largest corner contribution domain CCSN are denoted X_1 and X_2;
let Set_1 be the set of pixels on the line segment from the center coordinate (x, y) of the detection point X to the center coordinate (x_1, y_1) of X_1, containing n_1 pixels in total, of which n_S1 pixels satisfy F_S(X_i, X) = 1 (X_i ∈ Set_1) as calculated by the contribution function; let Set_2 be the set of pixels on the line segment from (x, y) to the center coordinate (x_2, y_2) of X_2, containing n_2 pixels in total, of which n_S2 pixels satisfy F_S(X_i, X) = 1 (X_i ∈ Set_2);
the following discrimination function is then calculated:
when F_ns(X) takes the value 1, the point X to be detected is judged to be a noise point or an edge burr point and is removed; when F_ns(X) takes the value 0, the point X to be detected is retained as a true corner point.
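To make claims 1 and 3 concrete, the inner-template operations can be sketched in Python. This is a hypothetical rendering: the ring radius, the 0/90/180/270-degree layout of the four template points, the flat-domain rule (no ring pixel differing from X by more than T_d), and the sign convention of S_X are all assumptions not fixed by the claims above.

```python
import numpy as np

def inner_template_offsets(radius=3):
    # 4 points uniformly distributed on a ring around X
    # (assumed layout and radius).
    return [(0, radius), (-radius, 0), (0, -radius), (radius, 0)]

def flat_or_corner(img, x, t_d):
    # Claim 1 sketch: count ring pixels whose gray value differs from
    # X by more than T_d.  Assumed rule: F_Corner = 1 (flat domain)
    # when no covered pixel differs significantly, else 0 (corner domain).
    fx = float(img[x])
    n_diff = sum(
        1 for dr, dc in inner_template_offsets()
        if abs(float(img[x[0] + dr, x[1] + dc]) - fx) > t_d
    )
    return 1 if n_diff == 0 else 0

def corner_domain_state(img, x, t):
    # Claim 3 sketch: S_X is the sum of per-point comparisons, +1 when
    # X is brighter than the ring pixel by more than the segmentation
    # threshold t, -1 when darker, 0 otherwise (assumed signs).
    fx = float(img[x])
    s = 0
    for dr, dc in inner_template_offsets():
        fy = float(img[x[0] + dr, x[1] + dc])
        if fx - fy > t:
            s += 1
        elif fy - fx > t:
            s -= 1
    return s  # S_X > 0: bright area; S_X < 0: dark area; 0: rotate template
```

A uniform patch is classified as flat, while an isolated bright point falls in the corner domain with a positive S_X (bright area).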
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310124386.2A CN103198319B (en) | 2013-04-11 | 2013-04-11 | For the blurred picture Angular Point Extracting Method under the wellbore environment of mine |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310124386.2A CN103198319B (en) | 2013-04-11 | 2013-04-11 | For the blurred picture Angular Point Extracting Method under the wellbore environment of mine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103198319A CN103198319A (en) | 2013-07-10 |
CN103198319B true CN103198319B (en) | 2016-03-30 |
Family
ID=48720851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310124386.2A Expired - Fee Related CN103198319B (en) | 2013-04-11 | 2013-04-11 | For the blurred picture Angular Point Extracting Method under the wellbore environment of mine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103198319B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105354582B (en) * | 2015-11-20 | 2019-04-02 | 武汉精测电子集团股份有限公司 | Image Angular Point Extracting Method and device and image angle point grid photographic device |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106416242A (en) * | 2014-02-13 | 2017-02-15 | 高地技术解决方案公司 | Method of enhanced alignment of two means of projection |
CN105513037B (en) * | 2014-09-30 | 2018-06-22 | 展讯通信(上海)有限公司 | Angular-point detection method and device |
CN104751458B (en) * | 2015-03-23 | 2017-08-25 | 华南理工大学 | A kind of demarcation angular-point detection method based on 180 ° of rotation operators |
CN106682678B (en) * | 2016-06-24 | 2020-05-01 | 西安电子科技大学 | Image corner detection and classification method based on support domain |
CN106780611A (en) * | 2016-12-10 | 2017-05-31 | 广东文讯科技有限公司 | One kind uses intelligent terminal camera angular-point detection method |
CN106845494B (en) * | 2016-12-22 | 2019-12-13 | 歌尔科技有限公司 | Method and device for detecting contour corner points in image |
CN108960012B (en) * | 2017-05-22 | 2022-04-15 | 中科创达软件股份有限公司 | Feature point detection method and device and electronic equipment |
CN111028177B (en) * | 2019-12-12 | 2023-07-21 | 武汉大学 | Edge-based deep learning image motion blur removing method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101887586A (en) * | 2010-07-30 | 2010-11-17 | 上海交通大学 | Self-adaptive angular-point detection method based on image contour sharpness |
CN102789637A (en) * | 2012-07-12 | 2012-11-21 | 北方工业大学 | Salient region extraction based on improved SUSAN (small univalue segment assimilating nucleus) operator |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8175376B2 (en) * | 2009-03-09 | 2012-05-08 | Xerox Corporation | Framework for image thumbnailing based on visual similarity |
US8456711B2 (en) * | 2009-10-30 | 2013-06-04 | Xerox Corporation | SUSAN-based corner sharpening |
Non-Patent Citations (2)
Title |
---|
A fast corner detector for fuzzy mineshaft images based on dual-threshold; Yuan-Xiu Xing et al.; 2012 International Conference on Wavelet Active Media Technology and Information Processing (ICWAMTIP); 2012-12-19; abstract, Section 2.1 (p. 132) to Section 3.2 (p. 134), Figures 1-3 *
An Adaptive Corner Detecting for Real-Time Applications; Yuanxiu Xing et al.; 2012 International Conference on Audio, Language and Image Processing (ICALIP); 2012-07-18; Chapter 2 (p. 92) to Chapter 3 (p. 93), Figures 1-3 *
Also Published As
Publication number | Publication date |
---|---|
CN103198319A (en) | 2013-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103198319B (en) | For the blurred picture Angular Point Extracting Method under the wellbore environment of mine | |
CN108549874B (en) | Target detection method, target detection equipment and computer-readable storage medium | |
CN107578035B (en) | Human body contour extraction method based on super-pixel-multi-color space | |
CN108805023B (en) | Image detection method, device, computer equipment and storage medium | |
Zhang et al. | Object-oriented shadow detection and removal from urban high-resolution remote sensing images | |
CN104966085B (en) | A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features | |
CN111145209B (en) | Medical image segmentation method, device, equipment and storage medium | |
CN110163219B (en) | Target detection method based on image edge recognition | |
CN103366167B (en) | System and method for processing image for identifying alphanumeric characters present in a series | |
CN103020965B (en) | A kind of foreground segmentation method based on significance detection | |
CN107092871B (en) | Remote sensing image building detection method based on multiple dimensioned multiple features fusion | |
WO2022027931A1 (en) | Video image-based foreground detection method for vehicle in motion | |
WO2017054314A1 (en) | Building height calculation method and apparatus, and storage medium | |
CN104134200B (en) | Mobile scene image splicing method based on improved weighted fusion | |
CN109241973B (en) | Full-automatic soft segmentation method for characters under texture background | |
CN106096491B (en) | Automatic identification method for microaneurysms in fundus color photographic image | |
CN108320294B (en) | Intelligent full-automatic portrait background replacement method for second-generation identity card photos | |
CN103310439A (en) | Method for detecting maximally stable extremal region of image based on scale space | |
CN104484652A (en) | Method for fingerprint recognition | |
CN111079688A (en) | Living body detection method based on infrared image in face recognition | |
CN105631871A (en) | Color image duplicating and tampering detection method based on quaternion exponent moments | |
CN111027637A (en) | Character detection method and computer readable storage medium | |
CN117765287A (en) | Image target extraction method combining LWR and density clustering | |
CN117611819A (en) | Image processing method and device | |
CN107704864B (en) | Salient object detection method based on image object semantic detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20160330 Termination date: 20170411 |
CF01 | Termination of patent right due to non-payment of annual fee |