CN107168930B - Tight-frame Grouplet associated-domain calculation method - Google Patents

Tight-frame Grouplet associated-domain calculation method

Info

Publication number
CN107168930B
CN107168930B (application CN201710356482.8A)
Authority
CN
China
Prior art keywords
layer
calculating
point
coefficient
line detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710356482.8A
Other languages
Chinese (zh)
Other versions
CN107168930A (en)
Inventor
闫敬文
袁振国
王宏志
陈宏达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou University
Original Assignee
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University filed Critical Shantou University
Priority to CN201710356482.8A priority Critical patent/CN107168930B/en
Publication of CN107168930A publication Critical patent/CN107168930A/en
Application granted granted Critical
Publication of CN107168930B publication Critical patent/CN107168930B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention discloses a method for computing the tight-frame Grouplet associated domain. By introducing a line-detection operator into the associated-domain computation, the time cost of the conventional tight-frame Grouplet transform is markedly reduced and its efficiency improved, widening the range of image-processing applications of the transform. This is achieved without significant loss of coefficient-matrix sparsity and while still allowing perfect reconstruction of the original image, and is therefore of practical significance.

Description

Tight-frame Grouplet associated-domain calculation method
Technical Field
The invention relates to an image processing method, and in particular to a method for calculating the tight-frame Grouplet associated domain.
Background
A few lines can sketch a figure or a texture with distinctive characteristics. Natural images generally contain regions composed of locally directional structures. The key to image processing is to represent the complex geometric information in an image well and to obtain a sparse representation of it.
The wavelet transform can adaptively adjust its resolution according to the local regularity of an image, which makes it particularly efficient for image representation. However, the wavelet basis is not optimal for images with strong geometric structure, because its square support cannot adaptively represent directional geometry. Curvelets, Contourlets, Bandelets and Wedgelets were successively proposed to address this problem and achieved good results in certain image-processing applications; however, their improvement in practice is not as large as theory predicts, possibly because the texture of real images is too complex for them to represent well. To overcome this drawback, Mallat proposed the Grouplet transform in 2008. It is a novel transform whose basis changes with the geometric structure of the image at different scales, so the geometric features of the image can be exploited to the greatest extent. At the same time, the Grouplet transform is simple to compute and admits a fast algorithm.
The tight-frame Grouplet decomposition involves the computation of two layers: the associated-domain layer and the coefficient layer. How the associated domain is found strongly affects the performance of the transform. The existing search is discretized and can accurately reflect the variation of each pixel, but the search grid must be planned in advance; the direction therefore cannot be selected adaptively according to the image structure, and images containing complex textures cannot be represented well.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a method for calculating the tight-frame Grouplet associated domain that reduces the high complexity introduced by the Block Matching algorithm in the original transform and improves the efficiency of the tight-frame Grouplet transform.
In order to solve the above technical problem, an embodiment of the present invention provides a method for calculating a tight-frame group associated domain, including the following steps:
1) converting an input image into a gray-scale image;
2) performing Grouplet decomposition of the gray-scale image: computing the coefficient-layer coefficients of the j-th layer, finding the optimal matching point with a line-detection operator, and computing the associated-domain layer coefficients of the j-th layer, where 1 ≤ j ≤ J;
3) repeating step 2) until the coefficients of all J layers have been computed, where J is a given empirical value.
Further, the line-detection operator template is a 3 × 3 matrix.
Further, the formula by which the line-detection operator computes the optimal matching point is:

i* = arg max_{i ∈ {+45°, 0°, −45°}} | Σ_{j=1}^{9} OP_i[j] · BP_{3×3}[j] |

wherein BP_{3×3} is the 3 × 3 image data matrix after binarization about its center; OP_i[j] is the line-detection operator that detects direction i, i being one of the set {+45°, 0°, −45°}; and the symbol j denotes the j-th position of the corresponding matrix.
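The template values themselves are not given in the text (they appear in FIG. 2). As an illustration, the following Python sketch uses the classical −45°/0°/+45° line-detection masks as an assumed stand-in for the patent's templates, together with the absolute template response used to score each direction:

```python
# Assumed template values: classical 3x3 line-detection masks.
# The patent's actual templates are shown only in FIG. 2.
OB_N45 = [[ 2, -1, -1],
          [-1,  2, -1],
          [-1, -1,  2]]   # main-diagonal (-45 degree) line
OB_0   = [[-1, -1, -1],
          [ 2,  2,  2],
          [-1, -1, -1]]   # horizontal (0 degree) line
OB_P45 = [[-1, -1,  2],
          [-1,  2, -1],
          [ 2, -1, -1]]   # anti-diagonal (+45 degree) line

def template_response(bp, op):
    """Absolute value of the accumulated elementwise products between a
    binarized 3x3 patch bp and a template op: |sum_j OP_i[j] * BP[j]|."""
    return abs(sum(bp[r][c] * op[r][c] for r in range(3) for c in range(3)))
```

The direction whose template yields the largest response is taken as the match direction; ties between directions would need a convention that the patent does not spell out.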
Further, the steps by which the line-detection operator computes the optimal matching point are as follows:
1) at the j-th layer, taking a point P as the center, select a data matrix BP_{3×3} of size 3 × 3 formed from the pixel values;
2) binarize the data matrix BP_{3×3}: denoting the mean value of the matrix BP_{3×3} by Av, set BP_{x,y} to 0 when BP_{x,y} is less than the mean value Av, and to 1 otherwise;
3) solve the optimal-matching formula of the line-detection operator to obtain the optimal matching point m;
4) compute the associated-domain layer coefficient A_j[m] and the coefficient-layer coefficients {d_j[m], a_J[m]}_{1≤j≤J}, wherein the associated-domain layer coefficient is
A_j[P] = Q − P,
Q being the optimal matching point of P; as j increases from 1 to J, the coefficient-layer values are updated by the update formulas [equation images not reproduced], wherein a[m] denotes the mean coefficient at point m (when j = 1, a[m] is the pixel value at point m); d_j[m] denotes the difference coefficient at point m; and s[m] denotes the size of the support at point m (when j = 1, s[m] = 1); when j = J, a_J[m] is obtained from a closing formula [equation image not reproduced];
5) repeat the above steps until all points of layers 1 to J have been matched.
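The mean-threshold binarization of step 2) can be sketched directly; `binarize_patch` is an illustrative helper name, not a name used in the patent:

```python
def binarize_patch(bp):
    """Binarize a 3x3 patch BP against its own mean Av:
    entries strictly below the mean become 0, all others become 1."""
    av = sum(v for row in bp for v in row) / 9.0
    return [[0 if v < av else 1 for v in row] for row in bp]
```

Binarizing before correlating with the ±1/±2 templates makes the response depend only on the local structure, not on the absolute gray levels of the patch.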
The embodiments of the invention have the following beneficial effect: the method reduces the high complexity caused by the Block Matching algorithm and, when applied to the tight-frame Grouplet transform, greatly shortens the transform time and improves efficiency.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 shows the detection templates at-45 °,0 °, +45 °;
FIG. 3 is a comparison of the effects of the line detection algorithm and the Block Matching algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
The overall flow is shown in FIG. 1.
The tight-frame Grouplet associated-domain calculation method comprises the following steps: (1) input an image and convert it into a gray-scale image I; (2) perform Grouplet decomposition: compute the j-th-layer coefficient-layer coefficients, find the optimal matching points with the line-detection operator, and compute the j-th-layer associated-domain layer coefficients; (3) repeat step (2) until the coefficients of all J layers have been computed.
For a gray-scale image I of size M × N, the line-detection operator algorithm yields the J associated-domain layer coefficient matrices A_j and the (J + 1) layers of coefficient-layer coefficients {d_j[m], a_J[m]}_{1≤j≤J}.
Notation: (1) OB_N45 is the −45° detection template matrix of size 3 × 3; OB_0 is the 0° detection template matrix of size 3 × 3; OB_P45 is the +45° detection template matrix of size 3 × 3. (2) Q_{m,n} denotes the point in the m-th row and n-th column. Assuming the current point is P_{m,n}, the points to be matched are Q_{k,n−1} with k ∈ {m − 1, m, m + 1}; from top to bottom these correspond to the template matrices OB_N45, OB_0 and OB_P45, in that order. (3) J is the total number of layers of the tight-frame Grouplet decomposition; the set {d_j[m], a_J[m]}_{1≤j≤J} stores the (J + 1) layers of coefficient-layer values; the matrix A_j stores the j-th-layer associated-domain values. The templates are shown in FIG. 2.
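The matching geometry just described — current point P_{m,n}, three candidate points in column n − 1, each tied to one template — can be made concrete with a small helper (the function name is illustrative):

```python
def candidates(m, n):
    """Candidate match points Q_{k, n-1} for a current point P_{m, n},
    with k in {m-1, m, m+1}, paired top-to-bottom with the templates
    OB_N45, OB_0 and OB_P45 as the patent's notation prescribes."""
    return [((m - 1, n - 1), "OB_N45"),
            ((m,     n - 1), "OB_0"),
            ((m + 1, n - 1), "OB_P45")]
```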
The method comprises the following specific steps:
(1) Initialize j = 1, {d_j[m], a_J[m]}_{1≤j≤J} = 0, A_j = 0, m = n = 1.
(2) With P_{m,n} as the center, select the 3 × 3 data matrix BP_{3×3}.
(3) Binarize the matrix BP: denoting the mean value of BP by Av,
BP_{x,y} = 0 if BP_{x,y} < Av, and BP_{x,y} = 1 otherwise,
where the positive integers x, y ∈ [1, 3].
(4) Multiply BP element-wise with the line-detection template matrices OB_N45, OB_0 and OB_P45 respectively, accumulate each product, and take the absolute values T1, T2 and T3; the candidate point corresponding to max{T1, T2, T3} is the matching point.
(5) Compute the associated-domain layer coefficient A_j(m, n) of point P_{m,n} and the coefficient-layer coefficients {d_j[P], a_J[P]}_{1≤j≤J}. Specifically, with Q the best matching point of P, the associated-domain layer coefficient is:
A_j[P] = Q − P
and the coefficient-layer values are then updated by the update formulas [equation images not reproduced].
(6) If m < M, set m = m + 1;
otherwise, if n < N, set n = n + 1 and m = 1;
otherwise set m = 1, n = 1 and j = j + 1.
(7) If j > J, end the loop, obtaining the associated-domain layer coefficients A_j and the coefficient-layer coefficients {d_j[m], a_J[m]}_{1≤j≤J};
otherwise, jump to step (2).
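Putting the pieces together, one matching step for a single point can be sketched in Python. The template values are assumed classical masks (the patent's actual templates appear only in FIG. 2), and the coefficient-layer updates, whose formulas are not reproduced in the text, are omitted:

```python
# Assumed template values; the patent's actual templates are in FIG. 2.
OB_N45 = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
OB_0   = [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]]
OB_P45 = [[-1, -1, 2], [-1, 2, -1], [2, -1, -1]]
TEMPLATES = (OB_N45, OB_0, OB_P45)

def best_match(image, m, n):
    """For the point P = (m, n) (0-indexed row, column; assumed not on
    the image border), binarize the 3x3 patch centred on P against its
    mean, correlate it with the three direction templates, and return
    the candidate point Q in column n-1 with the largest response T,
    together with A_j[P] = Q - P."""
    patch = [row[n - 1:n + 2] for row in image[m - 1:m + 2]]
    av = sum(v for r in patch for v in r) / 9.0
    bp = [[0 if v < av else 1 for v in r] for r in patch]
    responses = []
    for k, op in zip((m - 1, m, m + 1), TEMPLATES):
        t = abs(sum(bp[r][c] * op[r][c] for r in range(3) for c in range(3)))
        responses.append((t, (k, n - 1)))
    t_best, q = max(responses)          # max{T1, T2, T3}
    return q, (q[0] - m, q[1] - n)      # Q and A_j[P] = Q - P
```

On a patch containing a horizontal bright line, the 0° template dominates, so the match stays in the same row; only the association offset A_j[P] = Q − P is stored, since the coefficient-update step is specified elsewhere.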
In this example, tested in the Matlab R2014a environment, the associated-domain layer coefficients computed by the proposed line-detection algorithm reveal the direction of the geometric flow in the original image just as the Block Matching algorithm does. The comparison data of FIG. 3 clearly show that, with no significant loss of coefficient-layer sparsity and with the original image still perfectly reconstructible, the proposed line-detection matching algorithm requires markedly less time than the Block Matching algorithm.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (2)

1. A method for calculating a tight-frame Grouplet associated domain, characterized by comprising the following steps:
1) converting an input image into a gray-scale image;
2) performing Grouplet decomposition of the gray-scale image: computing the coefficient-layer coefficients of the j-th layer, finding the optimal matching point with a line-detection operator, and computing the associated-domain layer coefficients of the j-th layer, where 1 ≤ j ≤ J, the line-detection operator template being a 3 × 3 matrix;
the formula by which the line-detection operator computes the optimal matching point being:

i* = arg max_{i ∈ {+45°, 0°, −45°}} | Σ_{j=1}^{9} OP_i[j] · BP_{3×3}[j] |

wherein BP_{3×3} is the 3 × 3 image data matrix after binarization about its center; OP_i[j] is the line-detection operator that detects direction i, i being one of the set {+45°, 0°, −45°}; and j denotes the j-th position of the corresponding matrix;
3) repeating step 2) until the coefficients of all J layers have been computed, J being a set value.
2. The method for calculating a tight-frame Grouplet associated domain according to claim 1, wherein the step of computing the optimal matching point by the line-detection operator comprises:
1) at the j-th layer, taking a point P as the center, selecting a data matrix BP_{3×3} of size 3 × 3 formed from the pixel values;
2) binarizing the data matrix BP_{3×3}: denoting the mean value of the matrix BP_{3×3} by Av, setting BP_{x,y} to 0 when BP_{x,y} is less than the mean value Av, and to 1 otherwise;
3) solving the optimal-matching formula of the line-detection operator of claim 1 for the optimal matching point m;
4) computing the associated-domain layer coefficient A_j[m] and the coefficient-layer coefficients {d_j[m], a_J[m]}_{1≤j≤J}, wherein the associated-domain layer coefficient is
A_j[P] = Q − P,
Q being the optimal matching point of P; as j increases from 1 to J, the coefficient-layer values are updated by the update formulas [equation images not reproduced], wherein a[m] denotes the mean coefficient at point m (when j = 1, a[m] is the pixel value at point m); d_j[m] denotes the difference coefficient at point m; and s[m] denotes the size of the support at point m (when j = 1, s[m] = 1); when j = J, a_J[m] is computed by a closing formula [equation image not reproduced];
5) repeating the above steps until all points of layers 1 to J have been matched.
CN201710356482.8A 2017-05-19 2017-05-19 Tight-frame Grouplet associated-domain calculation method Active CN107168930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710356482.8A CN107168930B (en) 2017-05-19 2017-05-19 Tight-frame Grouplet associated-domain calculation method


Publications (2)

Publication Number Publication Date
CN107168930A CN107168930A (en) 2017-09-15
CN107168930B (en) 2020-11-17

Family

ID=59815683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710356482.8A Active CN107168930B (en) Tight-frame Grouplet associated-domain calculation method

Country Status (1)

Country Link
CN (1) CN107168930B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8699852B2 (en) * 2011-10-10 2014-04-15 Intellectual Ventures Fund 83 Llc Video concept classification using video similarity scores
CN104240201A (en) * 2014-09-04 2014-12-24 南昌航空大学 Fracture image denoising and enhancing method based on group-contour wavelet transformation


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A grouplet-based reduced reference image quality assessment; Aldo Maalouf et al.; 2009 International Workshop on Quality of Multimedia Experience; 2009-07-31; pp. 59-63 *
Grouplet-based color image super-resolution; Aldo Maalouf et al.; 2009 17th European Signal Processing Conference; 2009-08-28; pp. 25-29 *
Research on metal fracture image processing methods based on the Grouplet transform; Zhou Zhiyu; China Masters' Theses Full-text Database, Information Science and Technology; 2014-04-15; No. 4; pp. I138-1030 *
Research on fracture image recognition based on Grouplet entropy and the relevance vector machine; Sun Yi et al.; Failure Analysis and Prevention; 2015-02-28; Vol. 10, No. 1; pp. 1-5 *
Research on license plate location in intelligent transportation systems; Zhang Yuqing; China Doctoral and Masters' Theses Full-text Database (Masters), Engineering Science and Technology II; 2005-03-15; No. 1; pp. C034-305 *

Also Published As

Publication number Publication date
CN107168930A (en) 2017-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant