CN108510453B - Intelligent traffic monitoring image deblurring method based on visual attention mechanism

Info

Publication number: CN108510453B
Application number: CN201810188142.3A
Authority: CN (China)
Prior art keywords: image, value, traffic monitoring, matrix, follows
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN108510453A
Inventors: 赵雪青, 石美红, 朱欣娟, 高全力, 师昕, 白新国, 薛文生
Current assignee: Xian Polytechnic University
Original assignee: Xian Polytechnic University
Application filed by Xian Polytechnic University
Priority date / filing date: 2018-03-07
Publication of CN108510453A: 2018-09-07
Application granted; publication of CN108510453B: 2021-06-29

Classifications

    • G06T 5/73 Image enhancement or restoration: deblurring; sharpening
    • G06F 18/23213 Pattern recognition, clustering techniques: non-hierarchical techniques using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06T 7/12 Image analysis, segmentation: edge-based segmentation
    • G06T 2207/10016 Image acquisition modality: video; image sequence
    • G06T 2207/10024 Image acquisition modality: color image
    • G06T 2207/20021 Special algorithmic details: dividing image into blocks, subimages or windows
    • G06T 2207/30232 Subject of image: surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent traffic monitoring image deblurring method based on a visual attention mechanism, which comprises the following steps. Step 1: generate a saliency map of the original traffic monitoring image by converting the blurred original image from the RGB color space to the HSI color space and then maximizing the scene information of the image. Step 2: segment the image using the contour features and texture features of the saliency map to obtain a segmentation map of the saliency map. Step 3: deblur the segmentation map of the saliency map using a structure-information diffusion function to finally obtain a sharp, deblurred image. The method has simple steps, occupies little memory, and achieves a marked deblurring effect.

Description

Intelligent traffic monitoring image deblurring method based on visual attention mechanism
Technical Field
The invention belongs to the technical field of image deblurring processing, and relates to an intelligent traffic monitoring image deblurring method based on a visual attention mechanism.
Background
With the increasing intelligence of traffic monitoring and traffic management, intelligent video surveillance technology based on the processing, analysis and understanding of traffic monitoring images has attracted growing attention. In practice, however, traffic monitoring images are degraded during capture, transmission and storage by the imaging equipment, the environment, noise and other factors. Most commonly, motion blur arises when there is relative motion between the traffic monitoring camera and the photographed object during exposure, and defocus blur arises when the distance between the photographed object and the optical center of the camera is inappropriate. Such blur destroys important detail information in the traffic monitoring image and severely limits the intelligence of traffic monitoring and management.
With the rapid growth of global automobile ownership and people's rising safety awareness, intelligent traffic supervision systems play an important role around the clock in ensuring safe road passage and preventing emergencies. As the scale of image data grows ever larger, however, screening and processing the massive volume of traffic monitoring image information becomes increasingly difficult even as unprecedented resources become available, and traditional image deblurring methods struggle to achieve the desired effect. How to rapidly screen images and remove blur so as to provide high-quality traffic monitoring images has therefore become an urgent research problem.
The human eye, as the perception terminal of visual image information, commands an advanced visual information processing system formed through long-term evolution. It processes input image information efficiently and accurately, exercises a natural selectivity over incoming visual information, judges quickly and reliably in ever-changing scenes, and concentrates attention on the important information of interest for detailed analysis and interpretation. Deblurring intelligent traffic monitoring images and improving image quality by means of the human visual attention mechanism is therefore of significant practical value.
Disclosure of Invention
The invention aims to provide an intelligent traffic monitoring image deblurring method based on a visual attention mechanism, solving the prior-art problems that massive traffic monitoring image information is difficult to screen and process, that rapid screening and blur removal can hardly deliver high-quality traffic monitoring images, and that the intelligence of traffic monitoring and management is consequently low.
The technical scheme adopted by the invention is an intelligent traffic monitoring image deblurring method based on a visual attention mechanism, implemented according to the following steps:
step 1, generating a saliency map of an original traffic monitoring image,
converting a blurred original traffic monitoring image from an RGB color space to an HSI color space; then according to the scene information of the image, maximizing the scene information to obtain a saliency map;
step 2, carrying out image segmentation by utilizing the contour features and the texture features of the saliency map to obtain a segmentation map of the saliency map;
step 3, deblurring processing is carried out on the segmentation image,
deblurring the segmentation map of the saliency map with a structure-information diffusion function to finally obtain a sharp, deblurred image.
The beneficial effect of the invention is that the saliency map of the original traffic monitoring image is obtained by maximizing scene information, the image is segmented using the contour features and texture features of the saliency map, deblurring is performed with a structure-information diffusion function, and the deblurred image is finally output. The method is simple, removes traffic monitoring image blur by means of the human visual attention mechanism, and can also be applied to general blurred color images.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2a is an image before the traffic monitoring image blur is removed by the present invention;
FIG. 2b is a saliency map of a traffic surveillance image before being deblurred using the present invention;
FIG. 3a is a segmentation map resulting from image segmentation using the present invention;
FIG. 3b is an edge map obtained by image segmentation using the present invention;
FIG. 3c is a graph of energy obtained from image segmentation using the present invention;
FIG. 4a is an image after the invention is adopted to remove the blurring of the traffic monitoring image;
FIG. 4b is a saliency map of a traffic surveillance image after the inventive deblurring is employed;
FIG. 5a is an image before a general color image blur is removed by the present invention;
FIG. 5b is a saliency map of a generic color image before being deblurred using the present invention;
FIG. 6a is a segmentation map resulting from image segmentation using the present invention;
FIG. 6b is an edge map obtained by image segmentation using the present invention;
FIG. 6c is a graph of energy obtained from image segmentation using the present invention;
FIG. 7a is an image after removing the blur of a general color image by using the present invention;
FIG. 7b is a saliency map after the general color image blur is removed using the present invention;
fig. 8 is a step chart of the main process of image deblurring conversion in embodiment 1 of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to an intelligent traffic monitoring image deblurring method based on a visual attention mechanism, which is implemented according to the following steps:
step 1, generating a saliency map of an original traffic monitoring image, and converting a blurred original traffic monitoring image from an RGB color space to an HSI color space which is more in line with a human visual perception system; then according to the scene information of the image, maximizing the scene information to obtain a saliency map,
the calculation formula for converting the RGB color space of the original traffic monitoring image into the HSI color space is as follows:
$$I = \frac{R + G + B}{3}$$

$$S = 1 - \frac{3}{R + G + B}\,\min(R, G, B)$$

$$H = \begin{cases} \theta, & B \le G \\ 360^\circ - \theta, & B > G \end{cases}, \qquad \theta = \arccos\!\left(\frac{\tfrac{1}{2}\big[(R - G) + (R - B)\big]}{\sqrt{(R - G)^2 + (R - B)(G - B)}}\right)$$
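For concreteness, a minimal Python sketch of this standard RGB-to-HSI conversion follows (Python is used here for illustration; the patent's own experiments used Matlab R2017a). Since the patent's conversion equations are rendered as images in the original, the textbook formulas above are assumed, and the function name, the [0, 1] input convention and the hue normalization are illustrative choices.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape H x W x 3) to HSI."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8  # guards against division by zero on black/gray pixels

    # Intensity: mean of the three channels.
    i = (r + g + b) / 3.0

    # Saturation: 1 - 3 * min(R, G, B) / (R + G + B).
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)

    # Hue: angle between the pixel's color vector and the red axis,
    # normalized here to [0, 1] instead of [0, 360) degrees.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)

    return np.stack([h, s, i], axis=-1)
```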
the method comprises the following specific steps of obtaining a saliency map of an image by maximizing scene information:
1.1) Compute basis vectors for the original traffic monitoring image by independent component analysis and extract the internal features of the color-converted image X.

The independent component analysis model is $X = AS$: the image X is formed by mixing the independent components S through the mixing matrix A, where $X = \{x_1(t), x_2(t), \dots, x_n(t)\}^T$, t is a time sample, $S = \{s_1, s_2, \dots, s_n\}^T$, and A is an $m \times n$ mixing matrix. The inverse matrix $A^{-1}$ of A is then computed; this unmixing matrix separates the components of a local-area pixel matrix into independent components. The image is divided into blocks, and each local-area pixel matrix is multiplied by the inverse matrix to obtain its basis vector $W = \{w_1, w_2, \dots, w_n\}$;
1.2) Perform likelihood estimation on each component of the basis vector W using Gaussian kernel density estimation; for a local-area pixel block the estimate is:

[Equation (1), rendered as an image in the original: the Gaussian kernel density estimate p of the local image block]

In formula (1), the function p is the Gaussian kernel density estimate of the local image block, σ is a scale factor, and the local-area pixel block size is j × k, here taken as 5 × 5. Ψ denotes the entire processed image. The components $w_i$ of the vector W are mutually independent, component $w_i$ taking the value $v_i$. ω(s, t) is the weight of the Gaussian function in the kernel density function, estimated from the probability of the basis coefficients of the current local image as:

[Equation, rendered as an image in the original: the weight ω(s, t)]
1.3) Obtain the saliency map by computing the self-information of the local-area pixel blocks:

$$I(x) = -\log p(x)$$

where I(x) is the self-information of the local-area pixel block and p(x) is the Gaussian kernel density estimate of the local image block from step 1.2);
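A minimal Python sketch of steps 1.1)-1.3) might look as follows, assuming a grayscale input (for example the I channel of the HSI image) and non-overlapping 5 × 5 blocks. FastICA from scikit-learn stands in for the patent's unspecified ICA routine, and uniform kernel weights replace ω(s, t), whose formula is rendered as an image in the original; all function and parameter names are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def saliency_by_information_maximization(intensity, patch=5, sigma=0.5, seed=0):
    """Sketch of steps 1.1)-1.3): ICA basis coefficients per block,
    Gaussian kernel density estimation, then self-information as saliency."""
    h, w = intensity.shape

    # 1.1) Tile the image into non-overlapping patch x patch blocks and
    # learn the ICA unmixing matrix A^{-1}; each block's projection onto
    # the basis W = {w_1, ..., w_n} gives its coefficients v_i.
    blocks = np.stack([intensity[y:y + patch, x:x + patch].ravel()
                       for y in range(0, h - patch + 1, patch)
                       for x in range(0, w - patch + 1, patch)])
    ica = FastICA(n_components=patch * patch, whiten="unit-variance",
                  max_iter=500, random_state=seed)
    coeffs = ica.fit_transform(blocks)          # one coefficient vector per block

    # 1.2) Gaussian kernel density estimate of each block's coefficients.
    # Components are treated as independent, so the joint density is the
    # product of per-component 1-D estimates over all blocks (Psi).
    n, d = coeffs.shape
    logp = np.empty(n)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for i in range(n):
        k = np.exp(-(coeffs - coeffs[i]) ** 2 / (2.0 * sigma ** 2))  # (n, d)
        p_comp = norm * k.mean(axis=0)          # per-component density at block i
        logp[i] = np.log(p_comp + 1e-12).sum()

    # 1.3) Self-information I(x) = -log p(x), reshaped to the block grid.
    grid = ((h - patch) // patch + 1, (w - patch) // patch + 1)
    return (-logp).reshape(grid)
```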
Step 2, image segmentation is carried out using the contour features and texture features of the saliency map to obtain a segmentation map of the saliency map; the specific steps are as follows:
2.1) For the saliency map extracted in step 1, randomly select K objects as initial cluster centers $v_1, v_2, \dots, v_K$; compute the distance between every object and each cluster center, assign each object to the closest cluster center, and recompute each cluster center from the objects currently assigned to it:

[Equation, rendered as an image in the original: the cluster-center update computed from the currently assigned objects]

where $x_i(k_i + 1)$ is the ith object and $k_i$ is the total number of clustered objects in the cluster containing the ith object;
2.2) Apply the K-means clustering recursively, dividing the whole image into K small, spatially continuous image-block regions;
2.3) Convert the image-block regions obtained in step 2.2) into an undirected weighted graph $G_c = (V_c, E_c; W_c)$ and compute the weight matrix W of the image-block regions:

[Equation, rendered as an image in the original: the region weight matrix $W_{ij}$]

where $W_{ij}$ is the weight between nodes i and j of the undirected weighted graph $G_c$, expressing the relation between regions i and j of the image; F(i) and F(j) are the gray values of the ith region $v_i$ and the jth region $v_j$, respectively; $\sigma_I$ and $\sigma_V$ are tuning parameters; $\|V(i) - V(j)\|_2$ is the Manhattan distance; and r is the distance from the data to the centroid, obtained adaptively;
2.4) Solve the characteristic equation $(D_c - W_c)\,y_c = \lambda D_c y_c$ for its eigenvalues and eigenvectors,

where $D_c - W_c$ is the Laplacian matrix, $W_c(i, j) = w_c(i, j)$, and $D_c(i, j) = \sum_j w_c(i, j)$;

$y_c$ is an indicator vector, each element of which represents a region;
2.5) Segment the image: using the eigenvector of the second-smallest eigenvalue from step 2.4), split the image into two parts, with the elements of $y_c$ greater than 0 in one group and the elements less than 0 in the other;

2.6) Recursively invoke steps 2.4) and 2.5) to obtain the segmented image.
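The sketch below illustrates steps 2.1)-2.6) under stated assumptions: K-means on (position, saliency) features produces the initial block regions, and the region weight matrix takes the classic normalized-cut form of Shi and Malik, since the patent's own weight formula is rendered as an image; sigma_i and sigma_v default to the values used in embodiment 1 below, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def kmeans_regions(saliency, k=8, seed=0):
    """Steps 2.1)-2.2): split the saliency map into K small, spatially
    continuous regions by clustering (row, col, saliency) features."""
    h, w = saliency.shape
    rows, cols = np.mgrid[0:h, 0:w]
    feats = np.column_stack([rows.ravel() / h, cols.ravel() / w,
                             saliency.ravel() / (saliency.max() + 1e-12)])
    return KMeans(n_clusters=k, n_init=10,
                  random_state=seed).fit_predict(feats).reshape(h, w)

def ncut_bipartition(gray, pos, sigma_i=58.0, sigma_v=128.0, r=None):
    """Steps 2.3)-2.5): build the region graph G_c = (V_c, E_c; W_c),
    solve (D_c - W_c) y_c = lambda D_c y_c, and split the regions by the
    sign of the eigenvector of the second-smallest eigenvalue."""
    gray = np.asarray(gray, float)               # mean gray value per region
    pos = np.asarray(pos, float)                 # centroid (row, col) per region

    d_f = np.abs(gray[:, None] - gray[None, :])              # F(i) - F(j)
    d_v = np.abs(pos[:, None, :] - pos[None, :, :]).sum(-1)  # Manhattan distance
    if r is None:
        r = d_v.mean()                           # r computed adaptively

    W = np.exp(-d_f ** 2 / sigma_i ** 2) * np.exp(-d_v ** 2 / sigma_v ** 2)
    W[d_v >= r] = 0.0                            # connect only nearby regions
    D = np.diag(W.sum(axis=1))

    _, vecs = eigh(D - W, D)                     # generalized eigenproblem
    return vecs[:, 1] > 0                        # two groups; recurse per 2.6)
```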
Step 3, deblurring processing is carried out on the segmentation image,
the method adopts a structure information diffusion function to carry out deblurring processing on the segmentation image of the saliency map, and the calculation formula is as follows:
Figure BDA0001590922190000062
wherein, i and j are respectively the position coordinates of the image pixel points, and the truncation error is O (tau + h)2) In the above formula, the time discrete step τ is preferably 5, the space discrete step h is preferably 400, the iteration number n is preferably 10,
In addition,

$$(I_x)_{i,j} = \big(2(I_{i+1,j} - I_{i-1,j}) + I_{i+1,j+1} - I_{i-1,j+1} + I_{i+1,j-1} - I_{i-1,j-1}\big)/4,$$

$$(I_y)_{i,j} = \big(2(I_{i,j+1} - I_{i,j-1}) + I_{i+1,j+1} - I_{i+1,j-1} + I_{i-1,j+1} - I_{i-1,j-1}\big)/4;$$
$G_\alpha$ is a Gaussian kernel function:

$$G_\alpha(x, y) = \frac{1}{2\pi\alpha^2}\exp\!\left(-\frac{x^2 + y^2}{2\alpha^2}\right)$$

where α is a scale parameter, preferably taking the value 1;

the diffusion function g(|t|) is calculated as:

[Equation, rendered as an image in the original: the diffusion function g(|t|)]

where

[Equation, rendered as an image in the original]

and the reaction term f(I) is calculated as:

[Equation, rendered as an image in the original: the reaction term f(I)]

where the quantization parameter k in the reaction term f(I) has an optimized value of 2, μ is the mean, and the parameters take the values $\mu_1 = 13$, $v_1 = 45$, $\mu_2 = 68$, $v_2 = 125$, $\mu_3 = 205$;
$SF_{ij}$ is the structure information function, calculated as:

[Equation, rendered as an image in the original: the structure information function $SF_{ij}$]

where $\nabla g$ is the gradient of an arbitrary pixel in the image g.
Through the above calculation, the deblurred sharp image is finally obtained.
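The following heavily hedged Python sketch shows one plausible shape of the step-3 iteration. The update formula, the diffusion function g, the reaction term f(I) and the structure function SF_ij are all rendered as images in the original, so a Perona-Malik-style g is substituted, f(I) and SF_ij are omitted, and the (I_x), (I_y) stencils quoted above are used for all derivatives; the step size tau and parameter k here are illustrative stand-ins rather than the patent's preferred values.

```python
import numpy as np
from scipy.ndimage import correlate, gaussian_filter

# The (I_x), (I_y) stencils from the text, written as correlation kernels;
# the first array axis is i, the second is j.
KX = np.array([[-1.0, -2.0, -1.0],
               [ 0.0,  0.0,  0.0],
               [ 1.0,  2.0,  1.0]]) / 4.0
KY = KX.T

def diffuse_segment(img, mask, n_iter=10, tau=0.1, alpha=1.0, k=2.0):
    """One segment's deblurring loop: explicit diffusion driven by the
    gradient of the Gaussian-smoothed image G_alpha * I."""
    I = img.astype(float).copy()
    for _ in range(n_iter):
        s = gaussian_filter(I, sigma=alpha)          # G_alpha * I
        sx = correlate(s, KX, mode="nearest")        # (I_x) of smoothed image
        sy = correlate(s, KY, mode="nearest")        # (I_y) of smoothed image
        g = 1.0 / (1.0 + (sx ** 2 + sy ** 2) / k ** 2)  # assumed g(|t|)

        ix = correlate(I, KX, mode="nearest")
        iy = correlate(I, KY, mode="nearest")
        div = (correlate(g * ix, KX, mode="nearest")
               + correlate(g * iy, KY, mode="nearest"))  # div(g * grad I)

        I[mask] += tau * div[mask]                   # update only this segment
    return I
```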
The following examples all use Matlab R2017a to implement the method described in the invention. Experimental platform configuration: Windows 10 operating system, Intel Core i7-5600U CPU, 8 GB RAM.
Example 1
Referring to fig. 8 and taking the removal of traffic monitoring image blur as an example, the specific steps are as follows:
step 1, generating a saliency map of an original traffic monitoring image.
As shown in fig. 2a, a blurred original traffic monitoring image is taken, and its RGB color space is first converted into the HSI color space:

$$I = \frac{R + G + B}{3}$$

$$S = 1 - \frac{3}{R + G + B}\,\min(R, G, B)$$

$$H = \begin{cases} \theta, & B \le G \\ 360^\circ - \theta, & B > G \end{cases}, \qquad \theta = \arccos\!\left(\frac{\tfrac{1}{2}\big[(R - G) + (R - B)\big]}{\sqrt{(R - G)^2 + (R - B)(G - B)}}\right)$$

Next, the internal features of the color-converted image X are extracted. From the model $X = AS$, the inverse matrix $A^{-1}$ of the mixing matrix A is computed; this unmixing matrix separates the components of a local-area pixel matrix into independent components. The image is divided into blocks, and each local-area pixel matrix is multiplied by the inverse matrix to obtain its basis vector $W = \{w_1, w_2, \dots, w_n\}$. Likelihood estimation is then performed on each component of the basis vector W using Gaussian kernel density estimation:

[Equation (1), rendered as an image in the original: the Gaussian kernel density estimate p of the local image block]

The local-area pixel block size is j × k, here taken as 5 × 5; Ψ denotes the entire processed image. The components $w_i$ of the vector W are mutually independent, component $w_i$ taking the value $v_i$, and ω(s, t) is the weight of the Gaussian function in the kernel density function, estimated from the probability of the basis coefficients of the current local image as:

[Equation, rendered as an image in the original: the weight ω(s, t)]

Finally, the saliency map is obtained by computing the self-information of the local-area pixel blocks:

$$I(x) = -\log p(x)$$
Step 2, image segmentation is carried out using the contour features and texture features of the saliency map.

First, K-means clustering is applied recursively to divide the whole image into K small, spatially continuous image-block regions, with the cluster centers updated as:

[Equation, rendered as an image in the original: the cluster-center update computed from the currently assigned objects]

where $x_i(k_i + 1)$ is the ith object and $k_i$ is the total number of clustered objects in the cluster containing the ith object;
Next, the image is partitioned. The image-block regions obtained in the previous step are converted into an undirected weighted graph $G_c = (V_c, E_c; W_c)$, whose weight matrix W is computed as:

[Equation, rendered as an image in the original: the region weight matrix $W_{ij}$]

where $W_{ij}$ is the weight between nodes i and j of the undirected weighted graph $G_c$, expressing the relation between regions i and j of the image; F(i) and F(j) are the gray values of the ith region $v_i$ and the jth region $v_j$; $\|V(i) - V(j)\|_2$ is the Manhattan distance; and the tuning parameters $\sigma_I$ and $\sigma_V$ are set to 58 and 128, respectively.

The characteristic equation $(D_c - W_c)\,y_c = \lambda D_c y_c$ is then solved for its eigenvalues and eigenvectors, where $W_c(i, j) = w_c(i, j)$, $D_c(i, j) = \sum_j w_c(i, j)$, $y_c$ is an indicator vector each element of which represents a region, and $D_c - W_c$ is the Laplacian matrix. The elements of $y_c$ greater than 0 are grouped into one set and the elements less than 0 into another, completing the image segmentation.
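Using the illustrative ncut_bipartition sketch given earlier, the concrete parameter choice of this embodiment would be exercised roughly as follows; the per-region gray values and centroids are placeholder data, not values from the embodiment.

```python
import numpy as np

# Placeholder per-region features; in the embodiment these come from the
# K-means block regions of the saliency map.
region_gray = np.array([40.0, 45.0, 200.0, 210.0])
region_pos = np.array([[10, 10], [12, 14], [60, 80], [62, 78]])

groups = ncut_bipartition(region_gray, region_pos,
                          sigma_i=58.0, sigma_v=128.0)  # embodiment-1 values
print(groups)   # boolean split: regions with y_c > 0 vs. y_c < 0
```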
Step 3, deblurring processing is carried out on the segmentation image, and the calculation formula is as follows:
[Equation, rendered as an image in the original: the explicit finite-difference update of the image $I_{i,j}$]

where i and j are the position coordinates of the image pixels; the truncation error is $O(\tau + h^2)$, the time step τ is preferably 5, the spatial step h is preferably 400, and the number of iterations n is preferably 10.
In addition,

$$(I_x)_{i,j} = \big(2(I_{i+1,j} - I_{i-1,j}) + I_{i+1,j+1} - I_{i-1,j+1} + I_{i+1,j-1} - I_{i-1,j-1}\big)/4,$$

$$(I_y)_{i,j} = \big(2(I_{i,j+1} - I_{i,j-1}) + I_{i+1,j+1} - I_{i+1,j-1} + I_{i-1,j+1} - I_{i-1,j-1}\big)/4,$$
$G_\alpha$ is a Gaussian kernel function:

$$G_\alpha(x, y) = \frac{1}{2\pi\alpha^2}\exp\!\left(-\frac{x^2 + y^2}{2\alpha^2}\right)$$

where α is a scale parameter with value 1; the diffusion function g(|t|) is obtained as:

[Equation, rendered as an image in the original: the diffusion function g(|t|)]

where

[Equation, rendered as an image in the original]

and the reaction term is calculated as:

[Equation, rendered as an image in the original: the reaction term f(I)]

In the above formula, the quantization parameter k in the reaction term f(I) is 2, μ is the mean, $\mu_1 = 13$, $\mu_2 = 68$, $\mu_3 = 205$, $v_1 = 45$, $v_2 = 125$;
$SF_{ij}$ is the structure information function:

[Equation, rendered as an image in the original: the structure information function $SF_{ij}$]

where $\nabla g$ is the gradient of an arbitrary pixel point in the image g; the deblurred image is thereby obtained,
and is output, as shown in fig. 4: the edge information of the automobile's license plate number in fig. 4a is prominent and the image quality is clearly improved, while the salient region in fig. 4b is more pronounced than in fig. 2b, giving a better visual effect.
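Tying the sketches together, a hypothetical end-to-end driver for this embodiment could look like the following; the input file name and all function names are the illustrative ones introduced above, not APIs defined by the patent, and the recursive normalized-cut refinement between steps 2 and 3 is elided.

```python
import numpy as np
from imageio.v3 import imread  # assumed I/O helper

rgb = imread("traffic_frame.png").astype(float) / 255.0  # hypothetical file name
hsi = rgb_to_hsi(rgb)

# Step 1: block-level saliency from the intensity channel.
sal = saliency_by_information_maximization(hsi[..., 2])

# Step 2: initial regions; recursive ncut_bipartition would refine them.
labels = kmeans_regions(sal, k=8)

# Step 3: diffuse each segment (assumes H and W are multiples of 5, so the
# block-level labels upsample exactly to the pixel grid).
out = hsi[..., 2].copy()
for lab in np.unique(labels):
    mask = np.kron(labels == lab, np.ones((5, 5))).astype(bool)
    out = diffuse_segment(out, mask)
```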
Example 2
Taking the removal of blur from a general color image as an example, the specific steps are as follows:
in this embodiment, fig. 5 is an image before the color image blur is removed by the present invention. Step 1 of calculating saliency map of blurred color image is the same as that of embodiment 1, and the image before blurring removal and the saliency map thereof are respectively shown in fig. 5a and 5 b.
In step 2, the parameter values for image segmentation using the contour features and texture features of the saliency map are the same as in embodiment 1; the segmentation map, edge map and energy map obtained by the segmentation are shown in fig. 6a, fig. 6b and fig. 6c, respectively.
In step 3, the blur of the segmented image is removed; the process is the same as in embodiment 1. The deblurred image and its saliency map are shown in fig. 7a and fig. 7b: the image edge information in fig. 7a is prominent and the image quality is clearly improved, and the salient region in fig. 7b is more pronounced than in fig. 5b, giving a better visual effect.

Claims (2)

1. An intelligent traffic monitoring image deblurring method based on a visual attention mechanism is characterized by comprising the following steps:
step 1, generating a saliency map of an original traffic monitoring image,
converting a blurred original traffic monitoring image from an RGB color space to an HSI color space; then according to the scene information of the image, maximizing the scene information to obtain a saliency map;
the specific process is as follows:
1.1) calculating basis vectors for the original traffic monitoring image by independent component analysis and extracting the internal features of the color-converted image X,

the independent component analysis model being $X = AS$, wherein the image X is formed by mixing the independent components S through the mixing matrix A, $X = \{x_1(t), x_2(t), \dots, x_n(t)\}^T$, t is a time sample, $S = \{s_1, s_2, \dots, s_n\}^T$, and A is an $m \times n$ mixing matrix; the inverse matrix $A^{-1}$ of A is then computed, and this unmixing matrix separates the components of a local-area pixel matrix into independent components; the image is divided into blocks, and each local-area pixel matrix is multiplied by the inverse matrix to obtain its basis vector $W = \{w_1, w_2, \dots, w_n\}$;
1.2) performing likelihood estimation on each component of the basis vector W using Gaussian kernel density estimation, the estimate for a local-area pixel block being:

[Equation (1), rendered as an image in the original: the Gaussian kernel density estimate p of the local image block]

in formula (1), the function p is the Gaussian kernel density estimate of the local image block, σ is a scale factor, the local-area pixel block size is j × k, and Ψ denotes the entire processed image; the components $w_i$ of the vector W are mutually independent, component $w_i$ taking the value $v_i$; ω(s, t) is the weight of the Gaussian function in the kernel density function, estimated from the probability of the basis coefficients of the current local image as:

[Equation, rendered as an image in the original: the weight ω(s, t)]
1.3) obtaining the saliency map by computing the self-information of the local-area pixel blocks:

$$I(x) = -\log p(x)$$

wherein I(x) is the self-information of the local-area pixel block and p(x) is the Gaussian kernel density estimate of the local image block from step 1.2);
step 2, carrying out image segmentation by utilizing the contour features and the texture features of the saliency map to obtain a segmentation map of the saliency map;
the specific process is as follows:
2.1) for the saliency map extracted in step 1, randomly selecting K objects as initial cluster centers $v_1, v_2, \dots, v_K$, computing the distance between every object and each cluster center, assigning each object to the closest cluster center, and recomputing each cluster center from the objects currently assigned to it:

[Equation, rendered as an image in the original: the cluster-center update computed from the currently assigned objects]

wherein $x_i(k_i + 1)$ is the ith object and $k_i$ is the total number of clustered objects in the cluster containing the ith object;
2.2) applying the K-means clustering recursively, dividing the whole image into K small, spatially continuous image-block regions;
2.3) converting the image-block regions obtained in step 2.2) into an undirected weighted graph $G_c = (V_c, E_c; W_c)$ and computing the weight matrix W of the image-block regions:

[Equation, rendered as an image in the original: the region weight matrix $W_{ij}$]

wherein $W_{ij}$ is the weight between nodes i and j of the undirected weighted graph $G_c$, expressing the relation between regions i and j of the image; F(i) and F(j) are the gray values of the ith region $v_i$ and the jth region $v_j$, respectively; $\sigma_I$ and $\sigma_V$ are tuning parameters; $\|V(i) - V(j)\|_2$ is the Manhattan distance; and r is the distance from the data to the centroid, obtained adaptively;
2.4) solving the characteristic equation $(D_c - W_c)\,y_c = \lambda D_c y_c$ for its eigenvalues and eigenvectors,

wherein $D_c - W_c$ is the Laplacian matrix, $W_c(i, j) = w_c(i, j)$, and $D_c(i, j) = \sum_j w_c(i, j)$;

$y_c$ is an indicator vector, each element of which represents a region;
2.5) segmenting the image: using the eigenvector of the second-smallest eigenvalue from step 2.4), splitting the image into two parts, with the elements of $y_c$ greater than 0 in one group and the elements less than 0 in the other;

2.6) recursively invoking steps 2.4) and 2.5) to obtain the segmented image;
step 3, deblurring processing is carried out on the segmentation image,
deblurring the segmentation image of the saliency map by adopting a structure information diffusion function to finally obtain a deblurred clear image,
the specific process is as follows:
the computational formula for the deblurring process is:
[Equation, rendered as an image in the original: the explicit finite-difference update of the image $I_{i,j}$]

wherein i and j are the position coordinates of the image pixels, the truncation error is $O(\tau + h^2)$, τ is the time step, h is the spatial step, and n is the number of iterations,
in addition,

$$(I_x)_{i,j} = \big(2(I_{i+1,j} - I_{i-1,j}) + I_{i+1,j+1} - I_{i-1,j+1} + I_{i+1,j-1} - I_{i-1,j-1}\big)/4,$$

$$(I_y)_{i,j} = \big(2(I_{i,j+1} - I_{i,j-1}) + I_{i+1,j+1} - I_{i+1,j-1} + I_{i-1,j+1} - I_{i-1,j-1}\big)/4;$$
$G_\alpha$ is a Gaussian kernel function:

$$G_\alpha(x, y) = \frac{1}{2\pi\alpha^2}\exp\!\left(-\frac{x^2 + y^2}{2\alpha^2}\right)$$
wherein α is a scale parameter; in formula (6),

[Equation, rendered as an image in the original: the diffusion term]

and the diffusion function g(|t|) is calculated as:

[Equation, rendered as an image in the original: the diffusion function g(|t|)]
wherein

[Equation, rendered as an image in the original]

and the reaction term is calculated as:

[Equation, rendered as an image in the original: the reaction term f(I)]
wherein, in the reaction term f(I), k is a quantization parameter, μ is the mean, and $\mu_1$, $\mu_2$, $\mu_3$, $v_1$, $v_2$ are all parameters;
$SF_{ij}$ is the structure information function; for an arbitrary pixel point g in the blurred image I it is calculated as:

[Equation, rendered as an image in the original: the structure information function $SF_{ij}$]

wherein $\nabla g$ is the gradient of the arbitrary pixel point g in the image.
2. The intelligent traffic monitoring image deblurring method based on the visual attention mechanism according to claim 1, wherein the time step τ is 5, the spatial step h is 400, and the number of iterations n is 10;

the scale parameter α takes the value 1, the quantization parameter k takes the value 2, and $\mu_1 = 13$, $\mu_2 = 68$, $\mu_3 = 205$, $v_1 = 45$, $v_2 = 125$.
CN201810188142.3A (filed 2018-03-07, priority 2018-03-07): Intelligent traffic monitoring image deblurring method based on visual attention mechanism. Status: Expired - Fee Related. Granted as CN108510453B (en).

Priority Applications (1)

CN201810188142.3A: priority date 2018-03-07, filing date 2018-03-07, title: Intelligent traffic monitoring image deblurring method based on visual attention mechanism

Applications Claiming Priority (1)

CN201810188142.3A: priority date 2018-03-07, filing date 2018-03-07, title: Intelligent traffic monitoring image deblurring method based on visual attention mechanism

Publications (2)

CN108510453A (en): published 2018-09-07
CN108510453B (en): published 2021-06-29

Family

ID=63377096

Family Applications (1)

CN201810188142.3A (granted as CN108510453B): priority date 2018-03-07, filing date 2018-03-07, status Expired - Fee Related, title: Intelligent traffic monitoring image deblurring method based on visual attention mechanism

Country Status (1)

CN (1): CN108510453B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
CN110472086B * (priority 2019-08-02, published 2023-01-31, Xian Polytechnic University): Skeleton image retrieval method based on retina key feature extraction


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8750645B2 (en) * 2009-12-10 2014-06-10 Microsoft Corporation Generating a composite image from video frames

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514582A (en) * 2012-06-27 2014-01-15 郑州大学 Visual saliency-based image deblurring method
CN103884431A (en) * 2013-12-31 2014-06-25 华中科技大学 Infrared imaging detection and positioning method of underground building in plane surface environment
CN104616248A (en) * 2014-11-20 2015-05-13 杭州电子科技大学 Single image deblurring method combined with margin analysis and total variation
CN106097256A (en) * 2016-05-31 2016-11-09 南京邮电大学 A kind of video image fuzziness detection method based on Image Blind deblurring
CN107240119A (en) * 2017-04-19 2017-10-10 北京航空航天大学 Utilize the method for improving the fuzzy clustering algorithm extraction uneven infrared pedestrian of gray scale
CN107194927A (en) * 2017-06-13 2017-09-22 天津大学 The measuring method of stereo-picture comfort level chromaticity range based on salient region

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Image Deblurring Based On Visual Saliency; Bing Zhou et al.; 2012 International Conference on Systems and Informatics (ICSAI 2012); 2012-06-25; pp. 1919-1922 *
Research and Implementation of Spectral Clustering Algorithms Based on Graph Theory; Zheng Yangfan; China Master's Theses Full-text Database, Information Science and Technology; No. 5, 2013-05-15; pp. I138-17 *
Research on Motion-Image Deblurring Based on Visual Saliency; Wang Weizhe et al.; Computer Engineering and Design; Vol. 35, No. 8, August 2014; pp. 2827-2855 *
Blind Deblurring under Locally Weighted Total Variation; Wu Xiaoxu et al.; Journal of Computer-Aided Design & Computer Graphics; Vol. 26, No. 12, December 2014; pp. 2173-2181 *
Research on Computational Models of Visual Attention; Han Peng; China Master's Theses Full-text Database, Information Science and Technology; No. 8, 2016-08-15; pp. I138-719 *

Also Published As

Publication number Publication date
CN108510453A (en) 2018-09-07


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee

Granted publication date: 2021-06-29