CN107220962B - Image detection method and device for tunnel cracks - Google Patents

Image detection method and device for tunnel cracks

Info

Publication number
CN107220962B
CN107220962B CN201710227430.0A CN201710227430A
Authority
CN
China
Prior art keywords
image
crack
saliency map
value
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710227430.0A
Other languages
Chinese (zh)
Other versions
CN107220962A (en)
Inventor
龚秋明
殷丽君
杜修力
刘永强
赵振威
岳博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiurui Technology Co ltd
Beijing University of Technology
Original Assignee
Beijing Jiurui Technology Co ltd
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiurui Technology Co ltd, Beijing University of Technology filed Critical Beijing Jiurui Technology Co ltd
Priority to CN201710227430.0A priority Critical patent/CN107220962B/en
Publication of CN107220962A publication Critical patent/CN107220962A/en
Application granted granted Critical
Publication of CN107220962B publication Critical patent/CN107220962B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/0008 Industrial image inspection checking presence/absence
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection

Abstract

The invention discloses an image detection method and device for tunnel cracks. The method comprises the following steps: carrying out bilateral filtering processing on an image to be detected of the tunnel crack to obtain a filtered image; respectively constructing a brightness saliency map of the filtered image and a texture saliency map of the filtered image by using the visual saliency model; fusing the brightness saliency map and the texture saliency map to obtain a fused saliency map; segmenting the fusion saliency map by a self-adaptive threshold algorithm to obtain a crack region image; and obtaining the crack parameters of the crack area image when the crack of the crack area image is judged to be the real crack. According to the image detection method provided by the embodiment of the invention, the cracks with poor continuity and low contrast can be effectively detected.

Description

Image detection method and device for tunnel cracks
Technical Field
The invention relates to the technical field of image processing and identification, in particular to an image detection method and device for tunnel cracks.
Background
With the wide application of image acquisition devices such as digital cameras and ultra-high-speed scanners, the technical level and production efficiency of industrial production have continuously improved, which in turn places higher demands on the inspection capability that accompanies industrial production technology. As image processing technology develops, image detection technology has been widely applied in fields such as industrial production process inspection and daily-life safety inspection, greatly improving the production efficiency of enterprises and people's living standards.
In the technical field of image processing and recognition, image segmentation methods are generally adopted in industrial safety inspection to detect and recognize a specific region in an image. Existing image segmentation methods mainly exploit the overall gray-level difference between the region of interest and the background region to select a suitable threshold for segmenting the image and obtaining the region of interest. When the illumination is non-uniform or the gray-level difference between the region to be detected and the background region is small, the region of interest often cannot be segmented accurately.
Disclosure of Invention
The embodiment of the invention provides an image detection method and device for a tunnel crack, which can effectively detect cracks with poor continuity and low contrast.
According to an aspect of the embodiments of the present invention, there is provided an image detection method for a tunnel crack, the image detection method including: carrying out bilateral filtering processing on an image to be detected of the tunnel crack to obtain a filtered image; respectively constructing a brightness saliency map of the filtered image and a texture saliency map of the filtered image by using the visual saliency model; fusing the brightness saliency map and the texture saliency map to obtain a fused saliency map; segmenting the fusion saliency map by a self-adaptive threshold algorithm to obtain a crack region image; and obtaining the crack parameters of the crack area image when the crack of the crack area image is judged to be the real crack.
According to another aspect of the embodiments of the present invention, there is provided an image detecting apparatus for a tunnel crack, the image detecting apparatus including: the image filtering module is used for carrying out bilateral filtering processing on the image to be detected of the tunnel crack to obtain a filtered image; the saliency map construction module is used for respectively constructing a brightness saliency map of the filtered image and a texture saliency map of the filtered image by utilizing the visual saliency model; the saliency map fusion module is used for fusing the brightness saliency map and the texture saliency map to obtain a fusion saliency map; the saliency map segmentation module is used for segmenting the fusion saliency map through a self-adaptive threshold algorithm to obtain a crack region image; and the crack parameter acquisition module is used for acquiring the crack parameters of the crack area image when the crack of the crack area image is judged to be the real crack.
According to the image detection method and device provided by the embodiment of the invention, the brightness saliency map and the texture saliency map of the image to be detected are constructed, and the brightness saliency map and the texture saliency map are fused to obtain the fusion saliency map, so that cracks with poor continuity and low contrast can be highlighted and enhanced in the image, the obtained fusion saliency map is segmented, and crack parameter information is judged and counted, so that the tunnel cracks are effectively detected.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments of the present invention are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a flow chart illustrating an image detection method of a tunnel crack according to an embodiment of the invention;
FIG. 2 is a schematic structural diagram illustrating an image detection apparatus for tunnel cracks according to an embodiment of the present invention;
fig. 3 is a block diagram illustrating an exemplary hardware architecture of a computing device capable of implementing the image detection method and apparatus for tunnel cracks according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The following describes in detail an image detection method and apparatus for tunnel cracks provided according to an embodiment of the present invention with reference to the accompanying drawings. It should be noted that the embodiments described in this disclosure are not intended to limit the scope of the disclosure.
Fig. 1 is a flowchart illustrating an image detection method of a tunnel crack according to an embodiment of the present invention. As shown in fig. 1, the image detection method 100 for tunnel cracks in the present embodiment includes the following steps:
and step S110, carrying out bilateral filtering processing on the image to be detected of the tunnel crack to obtain a filtered image.
And step S120, respectively constructing a brightness saliency map of the filtered image and a texture saliency map of the filtered image by using the visual saliency model.
And step S130, fusing the brightness saliency map and the texture saliency map to obtain a fused saliency map.
And step S140, segmenting the fusion saliency map through a self-adaptive threshold algorithm to obtain a crack region image.
And S150, acquiring the crack parameters of the crack area image when the crack of the crack area image is judged to be a real crack.
According to the image detection method provided by the embodiment of the invention, the tunnel crack can be effectively detected, and a better detection result is obtained.
As an alternative embodiment, the step of the bilateral filtering processing in step S110 may specifically include:
and step S111, counting the gray variance value of each pixel in the specified neighborhood in the image to be detected.
In this step, the specified neighborhood may be, for example, a rectangular window of size M × N pixels, and the value of M or N may be an odd number between 7 and 21.
And step S112, calculating the average gray variance value of the image to be detected according to the gray variance value of each pixel.
And step S113, setting a gray variance parameter according to the average gray variance value, and constructing a bilateral filter kernel function based on the gray variance parameter.
And step S114, carrying out bilateral filtering on the image to be detected through a convolution template formula by utilizing the constructed bilateral filter kernel function, and taking the image to be detected after bilateral filtering as a filtered image.
For ease of understanding, as an example, the embodiment of the present invention takes the average gray variance σ of the image to be detected as the gray variance parameter σ_r and constructs the bilateral filter kernel function using the following equation (1):
ω(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σ_d²) − ‖I(i, j) − I(k, l)‖² / (2σ_r²) )   (1)
In the above equation (1), ω(i, j, k, l) represents the weight coefficient of each pixel point in the specified neighborhood, parameters i and j represent the coordinates of the image pixel point in the specified neighborhood, parameters k and l represent the corresponding coordinates in the Gaussian template, σ_d represents the variance of the spatial-domain Gaussian template, and σ_r represents the variance of the value-domain Gaussian template. In bilateral filtering, the weight coefficient ω(i, j, k, l) is the product of the value-domain Gaussian template ω_r(i, j, k, l) and the spatial-domain Gaussian template ω_d(i, j, k, l), that is, ω(i, j, k, l) = ω_d(i, j, k, l) × ω_r(i, j, k, l), where the value-domain Gaussian template is
ω_r(i, j, k, l) = exp( −‖I(i, j) − I(k, l)‖² / (2σ_r²) )
and the spatial-domain Gaussian template is
ω_d(i, j, k, l) = exp( −((i − k)² + (j − l)²) / (2σ_d²) ).
Using the constructed bilateral filter kernel function (1), bilateral filtering is performed using the following equation (2):
I′(i, j) = Σ_(k,l) I(k, l) ω(i, j, k, l) / Σ_(k,l) ω(i, j, k, l)   (2)
In the above equation (2), I′(i, j) represents the image data obtained after bilateral filtering.
In the embodiment, the image to be detected is preprocessed by adopting a bilateral filtering method, so that the noise of the image to be detected can be eliminated, the image edge information can be kept, the image to be processed is smooth, and the image detail can be obviously protected.
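For illustration only, the following Python sketch (using OpenCV and NumPy, which the embodiment does not mandate) shows one way to realize steps S111 to S114: the range parameter of the bilateral filter is derived from the average local gray variance of the image. The function name, window size and spatial sigma are illustrative assumptions rather than values fixed by the embodiment.

```python
import cv2
import numpy as np

def bilateral_prefilter(gray, win=9, sigma_space=3.0):
    """Bilateral pre-filtering with a data-driven range parameter (steps S111-S114).

    The range parameter sigmaColor is set from the average local gray variance
    of the image, as described in the text; the window size `win` and
    `sigma_space` are illustrative choices, not values fixed by the embodiment.
    """
    gray = gray.astype(np.float32)
    # Local variance in a win x win neighborhood: E[I^2] - (E[I])^2
    mean = cv2.boxFilter(gray, ddepth=-1, ksize=(win, win))
    mean_sq = cv2.boxFilter(gray * gray, ddepth=-1, ksize=(win, win))
    local_var = np.clip(mean_sq - mean * mean, 0, None)
    sigma_r = float(np.sqrt(local_var.mean()))  # average gray variance -> range sigma
    return cv2.bilateralFilter(gray, d=win, sigmaColor=sigma_r, sigmaSpace=sigma_space)
```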
In the embodiment of the present invention, the visual saliency model is an algorithm proposed according to the human visual attention mechanism, which means that the attention of human vision is usually focused on a brightness region or a texture region with a sudden change in visual signals.
In some embodiments, the step of constructing the luminance saliency map of the filtered image and the texture saliency map of the filtered image in step S120 may further include:
step S121, constructing a brightness Gaussian pyramid of the filtered image according to the visual saliency model, wherein the brightness Gaussian pyramid comprises a preset number of layers.
In the embodiment of the present invention, a gaussian pyramid is used as a multi-scale representation method of an image, which can efficiently extract features of the image at different scales, and a process of constructing a brightness gaussian pyramid in the present invention is described in detail below.
First, the luminance features of the original image to be processed are extracted to obtain a luminance feature image, which is taken as the bottom layer of the pyramid. Then, the image obtained by filtering the bottom-layer image with a Gaussian function is taken as the second layer of the luminance Gaussian pyramid; the length of the second-layer image is 1/2 of the length of the bottom-layer image, and its width is 1/2 of the width of the bottom-layer image. The image obtained by filtering the second-layer image with a Gaussian function is taken as the third layer of the luminance Gaussian pyramid; the length of the third-layer image is 1/2 of the length of the second-layer image, and its width is 1/2 of the width of the second-layer image, and so on: the luminance Gaussian pyramid of the image is constructed by successive Gaussian filtering. In the luminance Gaussian pyramid, except for the bottom layer, the length of each layer's image is 1/2 of the length of the adjacent lower layer's image, and the width of each layer's image is 1/2 of the width of the adjacent lower layer's image.
As an example, the number of layers of the gaussian pyramid with luminance constructed in the embodiment of the present invention is 5.
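A minimal sketch of step S121 follows, assuming OpenCV's pyrDown for the Gaussian filtering plus 1/2 downsampling described above; the 5-level depth follows the example in the text, and the helper name is hypothetical.

```python
import cv2

def luminance_pyramid(image, levels=5):
    """Build a luminance Gaussian pyramid (step S121).

    Each level is Gaussian-filtered and downsampled to half the width and
    height of the level below it; 5 levels matches the example in the text.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    pyramid = [gray]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))  # Gaussian blur + 1/2 downsample
    return pyramid
```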
And step S122, a Gabor-filtered image of each layer of the luminance Gaussian pyramid is calculated in the four directions 0, pi/4, pi/2 and 3 pi/4 by using a Gabor filter function, so as to obtain the texture Gaussian pyramid of the filtered image.
Specifically, the Gabor-filtered image of each layer of the luminance Gaussian pyramid in the four directions 0, pi/4, pi/2 and 3 pi/4 is calculated through the following formula (3) to form the texture pyramid:
G_θ(x, y) = I_l(x, y) * g(x, y; λ, θ, ψ, σ, γ)   (3)
In the above formula (3), g(x, y; λ, θ, ψ, σ, γ) is the kernel function of the Gabor filter, whose parameters are the wavelength λ, the direction θ, the phase shift ψ, the standard deviation σ and the length-width ratio γ; I_l denotes the current layer of the luminance Gaussian pyramid, * denotes convolution, and x and y are the coordinates of the image pixel points of the current layer of the luminance Gaussian pyramid.
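The texture pyramid of step S122 can be sketched as follows, assuming OpenCV's getGaborKernel and filter2D. The numeric Gabor parameters (kernel size, sigma, wavelength, aspect ratio, phase) are illustrative assumptions, since the embodiment names the parameters without fixing their values, and the four oriented responses are kept separate per layer.

```python
import cv2
import numpy as np

# Four orientations named in the text; other parameter values below are assumptions.
GABOR_THETAS = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

def texture_pyramid(lum_pyramid, ksize=15, sigma=4.0, lambd=10.0, gamma=0.5, psi=0):
    """Gabor-filter every luminance-pyramid layer in four orientations (step S122)."""
    tex_pyramid = []
    for layer in lum_pyramid:
        layer = layer.astype(np.float32)
        responses = []
        for theta in GABOR_THETAS:
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi)
            responses.append(cv2.filter2D(layer, cv2.CV_32F, kernel))
        tex_pyramid.append(responses)  # one list of four oriented responses per layer
    return tex_pyramid
```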
Step S123, respectively acquiring each layer of image except the bottom layer from high to low according to the layer number of the brightness Gaussian pyramid as an image to be processed, performing up-sampling on the image to be processed to obtain an image with the resolution being the same as that of the next layer of image adjacent to the image to be processed, and calculating the absolute value of image subtraction operation between the image obtained by up-sampling and the next layer of image adjacent to the image to be processed to obtain a brightness saliency map.
And step S124, respectively acquiring each layer of image except the bottom layer from high to low according to the layer number of the texture Gaussian pyramid as an image to be processed, performing up-sampling on the image to be processed to obtain an image with the resolution being the same as that of the next layer of image adjacent to the image to be processed, and calculating the absolute value of image subtraction operation of the image obtained by up-sampling and the next layer of image adjacent to the image to be processed to obtain a texture saliency map.
In step S123 or step S124, an image subtraction operation is performed using the following formula (4), and an absolute value is obtained from the result of the image subtraction operation:
s(i, j) = |p(i) − p↑(j)|   (4)
In the above formula (4), p(i) represents the i-th layer pyramid image, p↑(j) represents the image obtained by upsampling the j-th layer pyramid image to the same resolution as the i-th layer image, and i < j.
In the embodiment of the present invention, the image subtraction operation refers to a point-to-point subtraction operation of pixel values between pixel points of two or more images. Taking the image subtraction operation of two images as an example, each pixel point of one image is respectively obtained as a first pixel point, and the pixel value of the first pixel point and the pixel value of the pixel point with the same position as the first pixel point in the other image are subjected to subtraction operation to obtain the image subtraction operation result of the one image and the other image.
By the image subtraction operation in the embodiment of the invention, the unwanted superimposed pattern in the image to be detected can be removed when the crack is detected, and when the crack has low contrast and poor continuity, the background image can be effectively removed, so that the crack display effect is enhanced, and a better detection effect is obtained in the subsequent crack detection.
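A sketch of the across-scale difference of formula (4) for steps S123 and S124 is given below. Upsampling is done here with bilinear resizing, and the per-pair difference maps are resized to the base resolution and averaged to yield a single saliency map; that combination rule is an assumption, since the embodiment does not spell out how the pairwise differences are merged.

```python
import cv2
import numpy as np

def center_surround_saliency(pyramid):
    """Across-scale differences |p(i) - upsampled p(j)|, i < j (steps S123/S124)."""
    base_h, base_w = pyramid[0].shape[:2]
    diffs = []
    for j in range(1, len(pyramid)):
        i = j - 1                                   # adjacent finer layer
        h, w = pyramid[i].shape[:2]
        up = cv2.resize(pyramid[j], (w, h), interpolation=cv2.INTER_LINEAR)
        diff = np.abs(pyramid[i].astype(np.float32) - up.astype(np.float32))
        diffs.append(cv2.resize(diff, (base_w, base_h)))
    return np.mean(diffs, axis=0)
```

For the texture pyramid, the same routine can be applied to each of the four oriented responses and the results combined; that per-orientation handling is likewise an implementation choice, not something fixed by the embodiment.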
In some embodiments, the step of fusing the luminance saliency map and the texture saliency map in step S130 may specifically include:
step S131, respectively obtaining a maximum pixel value of a pixel point in the brightness saliency map as a first pixel value, obtaining a maximum pixel value of a pixel point in the texture saliency map as a second pixel value, dividing the pixel value of the pixel point in the brightness saliency map by the first pixel value to obtain a normalized brightness saliency map, and dividing the pixel value of the pixel point in the texture saliency map by the second pixel value to obtain a normalized texture saliency map.
Specifically, the normalization process of the luminance saliency map or the texture saliency map is performed by the following formula (5):
s_n(i, j) = s(i, j) / M(i, j)   (5)
When the luminance saliency map is normalized using the above equation (5), s_n(i, j) represents the normalized luminance saliency map obtained after the normalization processing, s(i, j) is the pixel value of a pixel point in the luminance saliency map, and M(i, j) is the maximum pixel value among the pixel points in the luminance saliency map.
When the texture saliency map is normalized using the above equation (5), s_n(i, j) represents the normalized texture saliency map obtained after the normalization processing, s(i, j) is the pixel value of a pixel point in the texture saliency map, and M(i, j) is the maximum pixel value among the pixel points in the texture saliency map.
Step S132, comparing the pixel value of each pixel point in the normalized brightness saliency map with the pixel point in the normalized texture saliency map at the same position as each pixel point, and taking the maximum value obtained by comparison as the pixel value of the pixel point in the fusion saliency map at the same position as each pixel point to obtain a fusion saliency map.
Specifically, the luminance saliency map and the texture saliency map are fused by the following formula (6):
S(i, j) = max( s_n^L(i, j), s_n^T(i, j) )   (6)
In the above formula (6), s_n^L(i, j) is the normalized luminance saliency map obtained after the normalization processing, s_n^T(i, j) is the normalized texture saliency map obtained after the normalization processing, and S(i, j) is the pixel value of the fused saliency map.
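Steps S131 and S132 (formulas (5) and (6)) reduce to a per-map normalization followed by a pixel-wise maximum, as in the following sketch; the small epsilon guarding against division by zero is an implementation assumption.

```python
import numpy as np

def fuse_saliency(lum_sal, tex_sal, eps=1e-8):
    """Normalize each saliency map by its maximum and fuse by a pixel-wise
    maximum, following formulas (5) and (6)."""
    lum_n = lum_sal / (lum_sal.max() + eps)   # s_n = s / M, formula (5)
    tex_n = tex_sal / (tex_sal.max() + eps)
    return np.maximum(lum_n, tex_n)           # pixel-wise max, formula (6)
```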
In some embodiments, the adaptive threshold algorithm in step S140 is a maximum inter-class variance method, and step S140 may further include:
and step S141, calculating to obtain the optimal crack segmentation threshold value of the fusion saliency map by a maximum inter-class variance method.
In the embodiment of the present invention, the maximum inter-class variance method, also known as the OTSU algorithm or Otsu's method, selects the threshold that maximizes the variance between the segmented foreground image region and the background image region.
Specifically, the selection of the optimal crack segmentation threshold may be represented by the following formula (7):
T = arg max_t ( var(I < t) + var(I ≥ t) )   (7)
In the above formula (7), var(I < t) represents the variance, with respect to the overall gray-level mean, of the image region whose gray values are smaller than t; var(I ≥ t) represents the variance, with respect to the overall gray-level mean, of the image region whose gray values are not smaller than t; t is a candidate segmentation threshold, and T is the value of t at which var(I < t) + var(I ≥ t) takes its maximum, i.e., the optimal crack segmentation threshold.
That is, the entire image data is divided into two classes using a crack segmentation threshold, which is the optimal crack segmentation threshold if the variance between the two classes is the largest.
And S142, segmenting the fusion saliency map by using the optimal crack segmentation threshold value to obtain a candidate crack region image of the fusion saliency map.
In the embodiment, the optimal crack segmentation threshold is obtained according to the maximum inter-class variance method, so that the misclassification probability of segmenting the image to be detected by applying the optimal crack segmentation threshold can be minimized, and the accuracy of the candidate crack region image obtained after image segmentation is improved.
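A sketch of steps S141 and S142, using OpenCV's built-in Otsu thresholding as a stand-in for the selection of the optimal crack segmentation threshold of formula (7); rescaling the fused saliency map to 8-bit before thresholding is an implementation assumption.

```python
import cv2
import numpy as np

def segment_cracks(fused_saliency):
    """Threshold the fused saliency map with Otsu's method (steps S141/S142)."""
    sal_u8 = cv2.normalize(fused_saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # THRESH_OTSU picks the threshold maximizing the between-class variance
    _, mask = cv2.threshold(sal_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask  # candidate crack-region image
```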
In some embodiments, step S150 may specifically include:
step S151, respectively calculating the mean value of the gray values of the pixel points in the crack area image and the mean value of the gray values of the pixel points in the specified area adjacent to the crack area image.
Step S152, if the mean value of the gray values of the pixel points in the crack area is smaller than the mean value of the gray values of the pixel points in the designated area, the crack of the crack area is judged to be a real crack.
In the step, the characteristic that the crack is low in gray level is utilized, the gray level mean value of pixel points in the adjacent area of the candidate crack area obtained by segmentation is counted, and if the gray level mean value of the crack is smaller than the gray level mean value of the adjacent image, the crack is judged to be a real crack.
Step S153, binary segmentation is performed on the crack region image to obtain a binary crack image of the crack region image, and a skeletonization operation is performed on the binary crack image to obtain the crack parameters of the crack region image, the crack parameters including the crack length and the crack width.
The image skeletonization is a method for analyzing a line-like image, and is used for identifying and counting size and shape information of a crack, such as the length and width of the crack, by extracting a skeleton of the crack determined as a real crack.
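A sketch of steps S151 to S153 under stated assumptions: the specified adjacent region is approximated by a bounding box around the crack expanded by a margin, minus the crack itself; skeletonization is delegated to scikit-image; and the width is estimated as crack area divided by skeleton length. None of these specific choices is fixed by the embodiment.

```python
import numpy as np
from skimage.morphology import skeletonize  # one possible skeletonization routine

def verify_and_measure(gray, crack_mask, margin=10, pixel_size_mm=1.0):
    """Gray-level verification (steps S151/S152) and skeleton-based length/width
    estimation (step S153). The margin and pixel-to-millimetre scale are
    illustrative parameters only."""
    crack = crack_mask > 0
    # Adjacent region: expanded bounding box around the crack, excluding the crack
    ys, xs = np.where(crack)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, gray.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, gray.shape[1])
    neighborhood = np.zeros_like(crack)
    neighborhood[y0:y1, x0:x1] = True
    neighborhood &= ~crack

    if gray[crack].mean() >= gray[neighborhood].mean():
        return None                                # not judged to be a real crack

    skeleton = skeletonize(crack)
    length_px = skeleton.sum()                     # skeleton pixel count ~ crack length
    width_px = crack.sum() / max(length_px, 1)     # area / length ~ mean crack width
    return {"length": length_px * pixel_size_mm, "width": width_px * pixel_size_mm}
```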
According to the image detection method provided by the embodiment of the invention, the crack area in the image can be accurately detected and identified, and a good detection effect can be obtained when the crack continuity is poor and the contrast is low.
The following describes an image detection apparatus for tunnel cracks according to an embodiment of the present invention in detail with reference to the accompanying drawings.
Fig. 2 is a schematic structural diagram of an image detection apparatus for detecting a tunnel crack according to an embodiment of the present invention. As shown in fig. 2, the image detection apparatus 200 in the embodiment of the present invention includes:
and the image filtering module 310 is configured to perform bilateral filtering processing on the image to be detected of the tunnel crack to obtain a filtered image.
And a saliency map construction module 320 for constructing a luminance saliency map of the filtered image and a texture saliency map of the filtered image, respectively, using the visual saliency model.
And the saliency map fusion module 330 is configured to fuse the luminance saliency map and the texture saliency map to obtain a fused saliency map.
And the saliency map segmentation module 340 is configured to segment the fusion saliency map through an adaptive threshold algorithm to obtain a crack region image.
And the crack parameter obtaining module 350 is configured to obtain the crack parameter of the crack region image when the crack of the crack region image is determined to be the real crack.
According to the image detection device provided by the embodiment of the invention, the brightness saliency map and the texture saliency map of the image to be detected are fused to obtain a fusion saliency map, the obtained fusion saliency map is segmented, and crack parameter information is judged and counted to effectively detect the tunnel cracks.
In some embodiments, the image filtering module 310 may further include:
a gray variance value statistic unit 311, configured to count a gray variance value of each pixel in a specified neighborhood in the image to be detected;
an average gray variance value calculating unit 322, configured to calculate an average gray variance value of the image to be detected according to the gray variance value of each pixel;
the bilateral filter kernel function construction unit 323 is used for setting a gray variance parameter according to the average gray variance value and constructing a bilateral filter kernel function based on the gray variance parameter;
and the bilateral filtering function calculating unit 324 is configured to perform bilateral filtering on the image to be detected through a convolution template formula by using the constructed bilateral filter kernel function, and use the image to be detected after bilateral filtering as a filtered image.
In the embodiment, the bilateral filtering can eliminate the noise of the image to be detected, and simultaneously, the edge information of the image is kept, so that the image to be processed is smooth.
In some embodiments, saliency map building module 320 may also include:
a luminance gaussian pyramid constructing unit 321, configured to construct a luminance gaussian pyramid of the filtered image according to the visual saliency model, where the luminance gaussian pyramid includes a predetermined number of layers;
the texture Gaussian pyramid constructing unit 322 is configured to calculate a Gabor-filtered image of each layer of the luminance Gaussian pyramid in four directions of 0, pi/4, pi/2, and 3 pi/4 by using a Gabor filter function, so as to obtain a texture Gaussian pyramid of the filtered image;
the brightness saliency map construction unit 323 is used for respectively acquiring each layer of image except the bottom layer as an image to be processed from high to low according to the layer number of the brightness Gaussian pyramid, performing up-sampling on the image to be processed to obtain an image with the resolution being the same as that of the next layer of image adjacent to the image to be processed, and calculating the absolute value of image subtraction operation between the image obtained by up-sampling and the adjacent next layer of image to obtain a brightness saliency map;
and the texture saliency map construction unit 324 is configured to obtain each layer of image except the bottom layer as an image to be processed according to the number of layers of the texture gaussian pyramid from high to low, perform upsampling on the image to be processed to obtain an image with a resolution equal to that of a next layer of image adjacent to the image to be processed, and calculate an absolute value of image subtraction operation between the image obtained through upsampling and the next layer of image adjacent to the image to obtain the texture saliency map.
In the embodiment, the construction of the saliency map strengthens the cracks in the image to be detected, when the crack has low contrast and poor continuity, the background image can be effectively removed, and a good data basis is provided for subsequent crack detection and identification.
In some embodiments, the saliency map fusion module 330 may further include:
the feature map normalization processing unit 331 is configured to obtain a maximum pixel value of a pixel point in the luminance saliency map as a first pixel value, obtain a maximum pixel value of a pixel point in the texture saliency map as a second pixel value, divide the pixel value of the pixel point in the luminance saliency map by the first pixel value to obtain a normalized luminance feature map, and divide the pixel value of the pixel point in the texture saliency map by the second pixel value to obtain a normalized texture feature map;
the fusion saliency map construction unit 332 is configured to compare pixel values of each pixel point in the normalized luminance saliency map with pixel points in the normalized texture saliency map at the same position as each pixel point, and obtain the fusion saliency map by using a maximum value obtained through the comparison as a pixel value of a pixel point in the fusion saliency map at the same position as each pixel point.
In some embodiments, the saliency map segmentation module 340 may further include:
the optimal crack segmentation threshold calculation unit 341 is configured to calculate an optimal crack segmentation threshold fused with the saliency map by using a maximum inter-class variance method;
the saliency map segmentation acquisition module 340 segments the fusion saliency map by using the optimal crack segmentation threshold to obtain a candidate crack region image of the fusion saliency map.
In this embodiment, the segmentation threshold selected by image segmentation is an optimal crack segmentation threshold obtained by a maximum inter-class variance method, and the optimal crack segmentation threshold is used to segment the fused saliency map, so that the error rate of cracks can be effectively reduced.
In some embodiments, the crack parameter acquisition module 350 may further include:
the gray value calculation unit 351 is used for calculating the mean value of the gray values of the pixel points in the crack region image and the mean value of the gray values of the pixel points in the specified region adjacent to the crack region image respectively;
the crack authenticity judging unit 352 is configured to judge that the crack in the crack region is a real crack if the mean of the gray values of the pixels in the crack region is smaller than the mean of the gray values of the pixels in the designated region;
the crack parameter obtaining unit 353 is configured to perform binary segmentation on the crack region image to obtain a binary crack image of the crack region image, and to perform a skeletonization operation on the binary crack image to obtain the crack parameters of the crack region image, where the crack parameters include the crack length and the crack width.
In this embodiment, the crack region obtained by segmentation is determined by using the characteristic that the crack gray scale is low, and after the crack is determined as a real crack, parameter information such as the crack length and the crack width is acquired.
Other details of the image detection apparatus for a tunnel crack according to the embodiment of the present invention are similar to those of the image detection method for a tunnel crack according to the embodiment of the present invention described above with reference to fig. 1, and are not repeated herein.
The image detection method and apparatus for tunnel cracks according to the embodiment of the present invention described in conjunction with fig. 1 and fig. 2 may be implemented by a computing device that is detachably or fixedly installed on an application server device. Fig. 3 is a block diagram illustrating an exemplary hardware architecture of a computing device capable of implementing the image detection method and apparatus for tunnel cracks according to an embodiment of the present invention. As shown in fig. 3, computing device 300 includes an input device 301, an input interface 302, a central processor 303, a memory 304, an output interface 305, and an output device 306. The input interface 302, the central processing unit 303, the memory 304, and the output interface 305 are connected to each other through a bus 310, and the input device 301 and the output device 306 are connected to the bus 310 through the input interface 302 and the output interface 305, respectively, and further connected to other components of the computing device 300. Specifically, the input device 301 receives image input information from the outside (for example, an image pickup device or a digital camera), and transmits the input information to the central processor 303 through the input interface 302; central processor 303 processes the input information based on computer-executable instructions stored in memory 304 to generate output information, stores the output information temporarily or permanently in memory 304, and then transmits the output information to output device 306 through output interface 305; output device 306 outputs the output information external to computing device 300 for use by the user.
That is, the computing device shown in fig. 3 may also be implemented to include: a memory storing computer-executable instructions; and a processor which, when executing computer executable instructions, may implement the image detection method and apparatus for tunnel cracks described in connection with fig. 1-2. Here, the processor may communicate with an image acquisition module such as an image management system or an image sensor mounted on the device to be detected, so as to execute computer-executable instructions based on relevant information from the image management system and/or the image sensor, thereby implementing the image detection method and device of the tunnel crack described in conjunction with fig. 1 to 2.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (10)

1. An image detection method for a tunnel crack, the image detection method comprising:
carrying out bilateral filtering processing on an image to be detected of the tunnel crack to obtain a filtered image;
respectively constructing a brightness saliency map of the filtered image and a texture saliency map of the filtered image by using a visual saliency model;
fusing the brightness saliency map and the texture saliency map to obtain a fused saliency map;
segmenting the fusion saliency map through a self-adaptive threshold algorithm to obtain a crack region image;
when the crack of the crack area image is judged to be a real crack, obtaining crack parameters of the crack area image;
the respectively constructing a brightness saliency map and a texture saliency map of the filtered image by using a visual saliency model comprises:
constructing a brightness Gaussian pyramid of the filtered image according to a visual saliency model, wherein the brightness Gaussian pyramid comprises a preset number of layers;
obtaining a texture Gaussian pyramid of the filtered image by calculating a Gabor-filtered image of each layer of the brightness Gaussian pyramid in four directions of 0, pi/4, pi/2 and 3 pi/4 by utilizing a Gabor filter function;
respectively acquiring each layer of image except the bottom layer as an image to be processed according to the number of layers of the brightness Gaussian pyramid from high to low, performing up-sampling on the image to be processed to obtain an image with the resolution being the same as that of the next layer of image adjacent to the image to be processed, and calculating an absolute value of image subtraction operation between the image obtained by up-sampling and the next layer of image adjacent to the image to obtain the brightness saliency map;
and respectively acquiring each layer of image except the bottom layer as an image to be processed according to the number of layers of the texture Gaussian pyramid from high to low, performing up-sampling on the image to be processed to obtain an image with the resolution being the same as that of the next layer of image adjacent to the image to be processed, and calculating the absolute value of image subtraction operation between the image obtained by up-sampling and the next layer of image adjacent to the image to be processed to obtain the texture saliency map.
2. The image detection method according to claim 1, wherein the filtering the image to be detected of the tunnel crack to obtain a filtered image comprises:
counting the gray variance value of each pixel in a specified neighborhood in the image to be detected;
calculating an average gray variance value of the image to be detected according to the gray variance value of each pixel;
setting a gray variance parameter according to the average gray variance value, and constructing a bilateral filter kernel function based on the gray variance parameter;
and carrying out bilateral filtering on the image to be detected by using the constructed bilateral filter kernel function through a convolution template formula, and taking the image to be detected after bilateral filtering as the filtered image.
3. The image detection method according to claim 1, wherein fusing the luminance saliency map and the texture saliency map to obtain a fused saliency map comprises:
respectively obtaining the maximum pixel value of a pixel point in the brightness saliency map as a first pixel value, obtaining the maximum pixel value of a pixel point in the texture saliency map as a second pixel value, dividing the pixel value of the pixel point in the brightness saliency map by the first pixel value to obtain a normalized brightness saliency map, and dividing the pixel value of the pixel point in the texture saliency map by the second pixel value to obtain a normalized texture saliency map;
and comparing each pixel point in the normalized brightness saliency map with a pixel point in the normalized texture saliency map at the same position as each pixel point, and taking the maximum value obtained by comparison as the pixel value of the pixel point in the fusion saliency map at the same position as each pixel point to obtain the fusion saliency map.
4. The image detection method of claim 1, wherein the adaptive threshold algorithm is a maximum between class variance method;
the segmenting the fusion saliency map through the adaptive threshold algorithm to obtain a candidate crack region image comprises the following steps:
calculating to obtain an optimal crack segmentation threshold value of the fusion saliency map by the maximum inter-class variance method;
and segmenting the fusion saliency map by using the optimal crack segmentation threshold value to obtain a candidate crack region image of the fusion saliency map.
5. The image detection method according to claim 1, wherein the obtaining of the crack parameters of the crack region image when the crack of the crack region image is determined to be a real crack comprises:
respectively calculating the mean value of the gray values of the pixel points in the crack area image and the mean value of the gray values of the pixel points in the specified area adjacent to the crack area image;
if the mean value of the gray values of the pixel points in the crack area is smaller than the mean value of the gray values of the pixel points in the specified area, judging that the crack of the crack area is a real crack;
and binary segmentation is carried out on the crack region image to obtain a crack binary image of the crack region image, skeletonization operation is carried out on the crack binary image to obtain crack parameters of the crack region image, and the crack parameters comprise crack length and crack width.
6. An image detection apparatus for a tunnel crack, the image detection apparatus comprising:
the image filtering module is used for carrying out bilateral filtering processing on the image to be detected of the tunnel crack to obtain a filtered image;
a saliency map construction module for constructing a luminance saliency map of the filtered image and a texture saliency map of the filtered image, respectively, using a visual saliency model;
the saliency map fusion module is used for fusing the brightness saliency map and the texture saliency map to obtain a fusion saliency map;
the saliency map segmentation module is used for segmenting the fusion saliency map through a self-adaptive threshold algorithm to obtain a crack region image;
the crack parameter acquisition module is used for acquiring the crack parameters of the crack area image when the crack of the crack area image is judged to be a real crack;
the saliency map building module comprises:
the brightness Gaussian pyramid construction unit is used for constructing a brightness Gaussian pyramid of the filtered image according to a visual saliency model, and the brightness Gaussian pyramid comprises a preset number of layers;
the texture Gaussian pyramid construction unit is used for calculating a Gabor-filtered image of each layer of image of the brightness Gaussian pyramid in four directions of 0, pi/4, pi/2 and 3 pi/4 by utilizing a Gabor filter function to obtain the texture Gaussian pyramid of the filtered image;
the brightness saliency map construction unit is used for respectively acquiring each layer of image except the bottom layer as an image to be processed according to the number of layers of the brightness Gaussian pyramid from high to low, performing up-sampling on the image to be processed to obtain an image with the resolution being the same as that of the next layer of image adjacent to the image to be processed, and calculating an absolute value of image subtraction operation between the image obtained by the up-sampling and the next layer of image adjacent to the image to be processed to obtain the brightness saliency map;
and the texture saliency map construction unit is used for respectively acquiring each layer of image except the bottom layer as an image to be processed according to the number of layers of the texture Gaussian pyramid from high to low, performing up-sampling on the image to be processed to obtain an image with the resolution being the same as that of the next layer of image adjacent to the image to be processed, and calculating an absolute value of image subtraction operation between the image obtained by the up-sampling and the next layer of image adjacent to the image to be processed to obtain the texture saliency map.
7. The image detection apparatus according to claim 6, wherein the image filtering module comprises:
the gray scale variance value statistical unit is used for counting the gray scale variance value of each pixel in a specified neighborhood in the image to be detected;
the average gray scale variance value calculating unit is used for calculating the average gray scale variance value of the image to be detected according to the gray scale variance value of each pixel;
the bilateral filter kernel function construction unit is used for setting a gray variance parameter according to the average gray variance value and constructing a bilateral filter kernel function based on the gray variance parameter;
and the bilateral filter function computing unit is used for performing bilateral filtering on the image to be detected through a convolution template formula by using the constructed bilateral filter kernel function, and taking the image to be detected after bilateral filtering as the filtered image.
8. The image detection apparatus according to claim 6, wherein the saliency map fusion module includes:
the saliency map normalization processing unit is used for respectively acquiring the maximum pixel value of a pixel point in the brightness saliency map as a first pixel value, acquiring the maximum pixel value of a pixel point in the texture saliency map as a second pixel value, dividing the pixel value of the pixel point in the brightness saliency map by the first pixel value to obtain a normalized brightness saliency map, and dividing the pixel value of the pixel point in the texture saliency map by the second pixel value to obtain a normalized texture saliency map;
and the fusion saliency map construction unit is used for comparing the pixel value of each pixel point in the normalized brightness saliency map with the pixel point in the normalized texture saliency map at the same position as each pixel point, and taking the maximum value obtained by comparison as the pixel value of the pixel point in the fusion saliency map at the same position as each pixel point to obtain the fusion saliency map.
9. The image detection apparatus according to claim 6, wherein the saliency map segmentation acquisition module includes:
the optimal crack segmentation threshold calculation unit is used for calculating and obtaining the optimal crack segmentation threshold of the fusion saliency map through a maximum inter-class variance method;
and the saliency map segmentation acquisition module segments the fusion saliency map by using the optimal crack segmentation threshold value to obtain a candidate crack region image of the fusion saliency map.
10. The image detection apparatus according to claim 6, wherein the crack parameter acquisition module includes:
the gray value calculation unit is used for respectively calculating the mean value of the gray values of the pixel points in the crack area image and the mean value of the gray values of the pixel points in the specified area adjacent to the crack area image;
the crack authenticity judging unit is used for judging that the crack of the crack area is a real crack if the mean value of the gray values of the pixel points in the crack area is smaller than the mean value of the gray values of the pixel points in the specified area;
and the crack parameter acquisition unit is used for binary segmentation of the crack region image to obtain a crack binary image of the crack region image, and performing skeletonization operation on the crack binary image to obtain crack parameters of the crack region image, wherein the crack parameters comprise crack length and crack width.
CN201710227430.0A 2017-04-07 2017-04-07 Image detection method and device for tunnel cracks Active CN107220962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710227430.0A CN107220962B (en) 2017-04-07 2017-04-07 Image detection method and device for tunnel cracks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710227430.0A CN107220962B (en) 2017-04-07 2017-04-07 Image detection method and device for tunnel cracks

Publications (2)

Publication Number Publication Date
CN107220962A CN107220962A (en) 2017-09-29
CN107220962B true CN107220962B (en) 2020-04-21

Family

ID=59927549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710227430.0A Active CN107220962B (en) 2017-04-07 2017-04-07 Image detection method and device for tunnel cracks

Country Status (1)

Country Link
CN (1) CN107220962B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102650554B1 (en) * 2018-10-30 2024-03-22 삼성디스플레이 주식회사 Device for inspecting display device and inspectnig method thereof
CN111289974A (en) * 2018-12-10 2020-06-16 中铁一局集团有限公司 Tunnel full-section rack method rapid detection system and operation control method thereof
CN111833363B (en) * 2019-04-17 2023-10-24 南开大学 Image edge and saliency detection method and device
CN110111711A (en) * 2019-04-30 2019-08-09 京东方科技集团股份有限公司 The detection method and device of screen, computer readable storage medium
CN111325724B (en) * 2020-02-19 2023-06-09 石家庄铁道大学 Tunnel crack region detection method and device
CN112668396A (en) * 2020-12-03 2021-04-16 浙江大华技术股份有限公司 Two-dimensional false target identification method, device, equipment and medium
CN114708226A (en) * 2022-04-01 2022-07-05 南通蓝城机械科技有限公司 Copper pipe inner wall crack detection method based on illumination influence

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175701A (en) * 2011-02-11 2011-09-07 王慧斌 System and method for online flaw detection of industrial X-ray machine
CN102521595A (en) * 2011-12-07 2012-06-27 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN103871053A (en) * 2014-02-25 2014-06-18 苏州大学 Vision conspicuousness-based cloth flaw detection method
CN104463847A (en) * 2014-08-05 2015-03-25 华南理工大学 Ink and wash painting characteristic rendering method
CN105510350A (en) * 2016-01-28 2016-04-20 北京工业大学 Tunnel surface image acquisition device and tunnel surface detection equipment
CN105913413A (en) * 2016-03-31 2016-08-31 宁波大学 Objective colorful image quality evaluation method based on online manifold learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175701A (en) * 2011-02-11 2011-09-07 王慧斌 System and method for online flaw detection of industrial X-ray machine
CN102521595A (en) * 2011-12-07 2012-06-27 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN103871053A (en) * 2014-02-25 2014-06-18 苏州大学 Vision conspicuousness-based cloth flaw detection method
CN104463847A (en) * 2014-08-05 2015-03-25 华南理工大学 Ink and wash painting characteristic rendering method
CN105510350A (en) * 2016-01-28 2016-04-20 北京工业大学 Tunnel surface image acquisition device and tunnel surface detection equipment
CN105913413A (en) * 2016-03-31 2016-08-31 宁波大学 Objective colorful image quality evaluation method based on online manifold learning

Also Published As

Publication number Publication date
CN107220962A (en) 2017-09-29

Similar Documents

Publication Publication Date Title
CN107220962B (en) Image detection method and device for tunnel cracks
CN107146217B (en) Image detection method and device
EP3176751B1 (en) Information processing device, information processing method, computer-readable recording medium, and inspection system
CN110263920B (en) Convolutional neural network model, training method and device thereof, and routing inspection method and device thereof
CN114418957A (en) Global and local binary pattern image crack segmentation method based on robot vision
CN116168026A (en) Water quality detection method and system based on computer vision
Selvakumar et al. The performance analysis of edge detection algorithms for image processing
CN107274452B (en) Automatic detection method for acne
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN113706523A (en) Method for monitoring belt deviation and abnormal operation state based on artificial intelligence technology
CN109993744B (en) Infrared target detection method under offshore backlight environment
CN117037132A (en) Ship water gauge reading detection and identification method based on machine vision
CN111460917A (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN111815660A (en) Method and device for detecting edges of goods in hazardous chemical warehouse and terminal equipment
CN114842213A (en) Obstacle contour detection method and device, terminal equipment and storage medium
CN115083008A (en) Moving object detection method, device, equipment and storage medium
Othman et al. Road crack detection using adaptive multi resolution thresholding techniques
Bisht et al. Integration of hough transform and inter-frame clustering for road lane detection and tracking
CN110765875B (en) Method, equipment and device for detecting boundary of traffic target
CN112329677A (en) Remote sensing image river target detection method and device based on feature fusion
CN111754491A (en) Picture definition judging method and device
CN114742849B (en) Leveling instrument distance measuring method based on image enhancement
Tiwari et al. A blind blur detection scheme using statistical features of phase congruency and gradient magnitude
CN112329674B (en) Icing lake detection method and device based on multi-texture feature fusion
CN115187918B (en) Method and system for identifying moving object in monitoring video stream

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant